Image Mapping and Tracking for AR Apps

http://youtu.be/434zw201iN8

I’m interested in the ways that augmented reality can be used to extend storytelling and interaction into the real world. The literary and gameplay potentials presented by this nascent technology seem limitless. That said, we have a fair distance to go before our world starts looking like the one depicted in Denno Coil. One of the biggest stumbling blocks I’ve encountered is the issue of precise positioning. Without knowing the user’s exact location and orientation, an AR system can’t properly overlay and position virtual objects. Most of the AR apps we’ve seen thus far depend on glyphs (printed fiducial markers) to accomplish this task; others use carefully pre-positioned Wi-Fi routers or Bluetooth nodes to triangulate the user’s location. The problem with these solutions is that, while they make for decent demos, they don’t really scale. If we’re going to tell stories using AR, I suspect we’ll be looking for solutions that break free of the need for pre-set glyphs, routers, or other equipment.
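To make the glyph approach concrete, here’s a minimal sketch (Python, using OpenCV’s ArUco module) of how a single marker yields the position-and-orientation fix an AR overlay needs. The camera intrinsics and marker size below are placeholder values, not numbers from any of the apps discussed here; a real app would calibrate the camera first.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker side length in meters (assumed)

# Placeholder intrinsics for a 640x480 camera; calibrate for real use.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# 3D corners of the marker in its own coordinate frame (z = 0 plane).
half = MARKER_SIZE / 2
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]])

def estimate_pose(frame):
    """Detect a glyph in the frame and recover the camera pose relative to it."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary,
                                       cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None  # no glyph visible: the system is lost

    # solvePnP maps the marker's known 3D geometry onto its detected 2D
    # corners, yielding rotation (rvec) and translation (tvec): exactly
    # the precise position-and-orientation fix AR overlays depend on.
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

Note that the pose is only valid while a marker is in view, which is exactly why this approach doesn’t scale: every location in the story has to be papered with glyphs in advance.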

This is where image recognition comes in. Projects like Microsoft’s Photosynth illustrate the capacity of image databases to define 3D space. More recently, AR researchers have started to apply image recognition and mapping techniques to create fluid “glyph-free” applications. The team at the University of Graz’s Christian Doppler Laboratory has just posted some exciting new videos of their work in this field.
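The core idea behind glyph-free tracking is to treat the scene itself as the marker: match natural image features in the live frame against a stored reference view, then recover the geometric transform between the two. The sketch below shows that basic matching-plus-homography loop; ORB features are used purely for illustration and stand in for whatever descriptors the Graz researchers actually use.

```python
import cv2
import numpy as np

# A sketch of markerless ("glyph-free") tracking: match natural features
# in the live frame against a stored reference image, then estimate the
# homography relating the two views.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track(reference, frame):
    """Locate the reference image inside the live frame, if visible."""
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return None

    matches = matcher.match(des_ref, des_frm)
    if len(matches) < 10:
        return None  # too few correspondences to trust

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards outlier matches; H warps reference coordinates into
    # frame coordinates, anchoring the AR overlay to the scene itself
    # rather than to a printed glyph.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

Scale this up from one reference image to a searchable database of them, as Photosynth-style projects do, and the environment itself becomes the positioning system.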

These videos hint at the seamless interaction we can expect from AR in the near future.

More: Handheld Augmented Reality at the Christian Doppler Laboratory
