
Friday, 08 June 2007


Listed below are links to weblogs that reference Photography in the Metaverse:

Comments

Wow, I have to thank you for posting that link. I'm nearly speechless. I have to wonder though, is this somehow related to the new "Street View" on Google maps? I can't wait for the day when I no longer have to use a map to find a restaurant, I can just view the route ahead of time as if I'm walking it and memorize the landmarks!

Hmmm. It seems like a nice image viewer, but, having watched the video and browsed the site, I'm not getting as excited as the hype clearly wants me to be. I mean, it's an image viewer with smooth zooming.

I think flickr and related sites with tagging and social-network based organization are a much bigger information revolution for photography.

It's a powerful idea, and it's going to drive the need for more powerful graphics cards and more RAM. I still don't use metadata or tagging to any great extent, so perhaps I'm not as impressed as some, but if it has the ability to make Google image searches more meaningful, I'm for it.

I think the posters so far are missing the potential of this kind of thing by seeing it only within the existing boundaries. There is great potential here, and in other developing areas, to expand our visual photographic concepts beyond what we are used to and what we expect.

Ctein - combine it with Jeff Han's multi-touch screen and you can have even more fun...
http://www.ted.com/index.php/talks/view/id/65

I was just discussing Photosynth this morning at work. We store hundreds of thousands of commercial images for customers of ours and having ways to navigate it easily is very important. I personally have over 30,000 photos on my computer and navigating them is a pain. Tagging and searching is great, but it's hard to get people to do well (myself included). Nothing beats an intuitive and lightning fast visual interface.

Now when you think about combining Seadragon/Photosynth and Microsoft's new surface computing technology, it's easy to get really excited about where this is going.

Check out this surface computing demo:
http://link.brightcove.com/services/link/bcpid271552687/bctid933742930

Full disclosure: I work for the Borg, although not in this team. The point is that this is another layer on top of the social networking/folksonomy stuff -- that I can look at one of Ctein's photos, which might link me to someone else's photo of the same area in a different season, which might link me to an article in wikipedia on the species of tree in the photo.

Said another way -- you and I may not have used the same tags to describe a photo of the same area, but because Photosynth recognizes that they are photos of the same thing, someone looking at my photo can get some of your tagging goodness, and vice versa.
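The cross-pollination this commenter describes can be sketched as simple tag propagation over pairwise "same scene" matches. This is a minimal illustration, not Photosynth's actual mechanism; the function name, filenames, and tags are all invented, and the matches are assumed to come from some prior image-matching step.

```python
from collections import defaultdict

def propagate_tags(tags, matches):
    """Given per-photo tag sets and pairwise 'same scene' matches,
    share every tag across each connected group of matched photos."""
    # Build an adjacency list from the pairwise matches.
    adj = defaultdict(set)
    for a, b in matches:
        adj[a].add(b)
        adj[b].add(a)
    seen, result = set(), {}
    for photo in tags:
        if photo in seen:
            continue
        # Flood-fill the group of photos matched, directly or
        # transitively, to this one.
        group, stack = [], [photo]
        while stack:
            p = stack.pop()
            if p in seen:
                continue
            seen.add(p)
            group.append(p)
            stack.extend(adj[p])
        # Every photo in the group inherits the union of the group's tags.
        merged = set().union(*(tags.get(p, set()) for p in group))
        for p in group:
            result[p] = merged
    return result

# Two photographers tag the same scene differently; matching links them.
tags = {
    "ctein.jpg": {"oak", "autumn"},
    "mine.jpg": {"park"},
    "unrelated.jpg": {"cat"},
}
matches = [("ctein.jpg", "mine.jpg")]
print(propagate_tags(tags, matches)["mine.jpg"])  # mine.jpg now carries all three tags
```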

Good post.

I guess from the oohs and aahs at the conference that this is new technology.

I like the thought of reading the Guardian this way. I wonder whether one can search, though? As the content is 'images', I wonder whether one could search for a particular word or phrase?

On the question of accuracy, the mapping of Notre Dame from many images is one thing - but I would like to see mapping of 'things' about which we believe we have accurate 'maps' in our heads - such as famous faces.

It would be interesting to see what a composite of George W Bush is like for example. And what about 'things' that change over time - again, thinking of people's faces?

Perhaps one could slice groups of images by capture date and build a movie of them getting older?

David
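David's date-slicing idea can be sketched as a simple group-by over capture dates. A minimal sketch, assuming the dates have already been extracted (say, from EXIF); `timelapse_order`, the bucketing by year, and the filenames are all invented for illustration.

```python
from datetime import date

def timelapse_order(photos):
    """Sort (capture_date, filename) pairs chronologically and bucket
    them by year -- one frame group per year, oldest first."""
    frames = {}
    for when, name in sorted(photos):  # tuples sort by date first
        frames.setdefault(when.year, []).append(name)
    return frames

# Hypothetical shots of the same face across the years.
shots = [
    (date(2005, 6, 1), "face_2005a.jpg"),
    (date(2001, 1, 20), "face_2001.jpg"),
    (date(2005, 9, 3), "face_2005b.jpg"),
]
print(timelapse_order(shots))
# {2001: ['face_2001.jpg'], 2005: ['face_2005a.jpg', 'face_2005b.jpg']}
```

Feeding each year's group to an image-morphing step, in order, would give the "getting older" movie the comment imagines.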

I don't think this is especially original, at least not the trial Photosynth application. It's striking, as so many applications are these days. It could be even more striking, though hardly more original, if, for example, every view were a pair of displaced frames that could be watched through stereoscopic goggles; that would be impressive (though the concept is quite old).

I have mixed feelings about this. I see the great potential in this when it comes to linking photos from a lot of different sources - the social networking / folksonomy angle. I also admire the speed of zooming and the way a 3D space can be constructed automagically.
However, something deep inside me reacts to the beautiful Piazza San Marco represented as a cloud of white dots. This, to me, is a sort of radicalized version of Flickr: as a tool for building communities and relationships it's nice, but from the point of view of aesthetics it sucks. The user interface is ugly and the whole concept encourages snapshot photography – the number of pictures becomes the wow factor, more than the quality of each individual photo.
But that said, I'm still a bit fascinated...

It's a big deal, if it works. The problem with tagging is that there's no incentive for most people who post images to tag them: almost all of the benefit goes to the searcher rather than the tagger. So almost no images on the Internet are usefully tagged, and image search is too ineffective to be useful for much besides entertainment, outside specialized communities like Flickr.

Photosynth, if it works, blows all of that away. Flickr becomes obsolete. Maybe a lot of stock-photo sites become obsolete. High-quality photos on obscure personal websites and blogs become more valuable because now anyone can search for them.

It will be interesting to learn how effective this software really is. If it's effective, a lot of things we take for granted will change.

Even if it's not as good as the hype, it seems likely that someone will develop effective software of this type eventually.

Finally a visual representation of the semantic web idea almost anyone can understand. It is quite simply impressive even though it only hints at what else can be done.

"Maybe a lot of stock-photo sites become obsolete. High-quality photos on obscure personal websites and blogs become more valuable because now anyone can search for them."

Or from a different angle, anyone can search for them and lift them. Photostealing is already enough of a problem on the web -- this is just going to make it worse.

"Or from a different angle, anyone can search for them and lift them. Photostealing is already enough of a problem on the web -- this is just going to make it worse."

Isn't that a sort of glass-half-empty, man-the-barricades, status quo way of looking at it?

The old models don't and won't work any more (they already don't, when Bruce Davidson loses a major assignment to a girl on Flickr), so why stick with them instead of finding new models?

Wait until what's on *that* video gets matched up with this:

http://link.brightcove.com/services/player/bcpid932579976?bclid=932553050&bctid=933742930

I wonder if this will become an excuse for our government to censor our photos however they please because a shot may contribute to an elucidation of a supposed security risk.

Sorry - your shot of Gramma must disappear because there is a secret military installation 1200 yards away over her left shoulder.

Hi,

Yes, that is pretty impressive! I saw a talk by Richard Hartley at the ICPR in August 2006 who demoed the system (I think he even used the Notre Dame dataset) ... it was an academic project then, so I wonder whether they bought it or put it together independently. I couldn't find any link between them, and they have some pretty smart people at corporate research.

Well, anyway, I think that puts a lot of pressure on the Google/Mac people, since there are decades of research by lots of groups behind this technology, and now MS seems to own it(?) If you know any details, that would be interesting!

Reminds me of the scene in Blade Runner where he's searching through a photograph, into other rooms and around corners.
