
Oct. 27, 2013

Augmented Reality: Theory and Practice
By John Leigh

First set of Google Image Search results for "Augmented Reality." Note preponderance of phones and tablets.

Augmented reality is about real-time mediation between physical objects and a data background. Science fiction has been representing it for a while now: the most iconic example is probably still the "terminator vision" from the 1984 movie (for which, naturally, there is now an app), and more recently Charles Stross's 2007 novel Halting State offers a really compelling and comprehensive idea of what we might plausibly see in the near future, or, at this point, the recent past. As the technology has advanced and trends have become clearer (while arguably being driven by those same science-fictional representations), the visions have converged. At the moment we are living, in an unevenly distributed way, on the periphery of a future where everything's metadata potentially floats above it on a virtual text panel, provided one has the equipment to see it and things are working properly (there may also be ads). In this note, I hope to give an IT person's view of how to think productively about AR in the museum space, where to look for more information, what to expect in the near term, and what you can do right now to augment the reality in your own neighborhood.

As a method of turning the museum into a ubiquitous presence, AR holds a lot of promise.  Any given physical space or object can have layers of information attached to it, or layers of interaction or interpretation, waiting to be made explicit by an interposed device (at the moment the interposing devices are mostly phones or tablets, but one expects that head-mounted devices like Google Glass and its inevitable clones will pretty soon become the standard). The challenge, of course, is setting everything up.

SpotCrime AR overlay, done in Layar.  Photo by Flickr user Vin Crosbie, used under Creative Commons license.

But before we get to the setup, let's think a bit about what we really have to work with, and what we might want to do with it. Gartner's 2013 Hype Cycle places AR as just falling into the "Trough of Disillusionment," and puts its arrival at general utility (the "Plateau of Productivity," in Gartner's parlance) five to ten years out. In practice, this means that people probably started regarding in-museum phone-based tag implementations as de rigueur and a bit tiresome about seven months ago (which is not to say that there might not still be some utility in this approach, but no one is going to get particularly excited by the technology alone).

The most exciting aspect of AR is the possibility of superimposing the sort of ghostly virtual over the physically real.  When this works, it brings out the inherent weirdness of how much information and history is naturally attached to pretty much any object or place. 

Current AR implementations generally work based either on location (which, at its current level of granularity, seems to work best for building-sized objects) or image recognition (which has been the basis for most of the exhibition-related AR apps we have seen so far). The second category might reasonably be expanded to include things like interactive table apps that incorporate tagged physical objects. And then, less categorizable but still somewhat under the AR umbrella, we have VR apps with a lot of positionality and tactile interaction (like this amazing guillotine simulator for Oculus Rift).
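
To make the location-based case concrete, here is a minimal sketch (in Python, purely illustrative and not tied to any particular SDK) of the geometry such apps handle under the hood: given the device's GPS fix and compass heading, decide which nearby points of interest fall inside the camera's field of view. The POI names, radius, and field-of-view values are assumptions for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees clockwise from north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

def visible_pois(device_lat, device_lon, heading_deg, pois, radius_m=500, fov_deg=60):
    """Return (name, distance, angular offset) for POIs in range and in the camera's view."""
    hits = []
    for name, lat, lon in pois:
        d = haversine_m(device_lat, device_lon, lat, lon)
        if d > radius_m:
            continue
        b = bearing_deg(device_lat, device_lon, lat, lon)
        # Signed angle between camera heading and POI bearing, folded into [-180, 180).
        off = (b - heading_deg + 180) % 360 - 180
        if abs(off) <= fov_deg / 2:
            hits.append((name, d, off))  # off maps to a horizontal screen position
    return hits
```

The angular offset is what the app would translate into a horizontal pixel position for the floating label; GPS granularity is why this approach works best for building-sized targets.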

There are a lot of AR frameworks and SDKs floating around right now. Of these, the biggest are Wikitude, Layar, and Junaio (although there is a case to be made that these are not "true" AR in the sense that pioneers in the field had in mind), and the most generally useful seems to be ARIS. I'm calling particular attention to ARIS because it doesn't require any programming knowledge, and it can do location-based media overlays on iOS devices through a free app. Its editor is also web/Flash-based, which is extremely convenient: if you're reading this in a modern browser, you could literally open a new tab right now, build a really media-rich location-based quest exercise set anywhere on earth, release it, and have it played by anyone with an iOS device and an internet connection.

Some other implementations worth considering do not revolve around specific AR SDKs, but instead work with more basic sensor and display technologies. On the sensor side, the Kinect (and, now that Microsoft has released the SDK, the next-generation Kinect included with the Xbox One) seems like an especially useful place to start, since it integrates a lot of sensing and image/shape-recognition capability at a pretty low level. A static display might be a transparent LCD overlooking a point of interest, or just a big screen showing the sensor's output and integrated into a physical window or viewport.
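
As a sketch of the geometry behind that kind of static display: if a head tracker (Kinect or otherwise) gives you the viewer's position, placing a label on a transparent screen so it visually lines up with the physical object behind it is just a line/plane intersection. A minimal Python illustration, with the coordinate system and screen placement chosen arbitrarily for the example:

```python
def screen_point(viewer, obj, screen_x):
    """
    Where to draw a label on a transparent screen so it lines up with a
    physical object behind it, from the viewer's perspective.

    viewer, obj: (x, y, z) positions in meters; the screen is the plane x = screen_x.
    Returns the (y, z) intersection on the screen plane, or None if the
    screen does not sit between viewer and object.
    """
    vx, vy, vz = viewer
    ox, oy, oz = obj
    if not (min(vx, ox) < screen_x < max(vx, ox)):
        return None
    t = (screen_x - vx) / (ox - vx)  # fraction of the way from viewer to object
    return (vy + t * (oy - vy), vz + t * (oz - vz))
```

Recompute this per frame as the tracked head moves and the label appears pinned to the object behind the glass, which is most of what makes such a display feel "augmented" rather than just annotated.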

In the current moment, I would suggest that a phone app using one of the existing AR frameworks/SDKs is most likely to be the best choice for an out-of-museum experience (whether that is some kind of narrative or emergent interactive/game, or simply an informational overlay on a neighborhood or landscape) but that any onsite project should probably use custom programming and hardware in order to be really transparent and engaging.
