Using location data to predict where people will be, when they will be there, and who they will be there with

Never mind the increasingly ubiquitous surveillance-by-smartphone of where people are. Next up is keeping track of where they will be. University of Illinois researchers Long Vu, Quang Do, and Klara Nahrstedt have prototyped a system that analyzes the movements of people on the U of Illinois campus, then makes predictions about their future movements and social contacts:

The constructed model is able to answer three fundamental questions: (1) where the person will stay, (2) how long she will stay at the location, and (3) who she will meet.

In order to construct the predictive model, Jyotish includes an efficient clustering algorithm to cluster Wifi access point information in the Wifi trace into locations. Then, we construct a Naive Bayesian classifier to assign these locations to records in the Bluetooth trace and obtain a fine granularity of people movement. Next, the fine-grained movement trace is used to construct the predictive model, including a location predictor, stay duration predictor, and contact predictor, to provide answers for the three questions above. Finally, we evaluate the constructed predictive model over the real Wifi/Bluetooth trace collected by 50 participants on the University of Illinois campus from March to August 2010. Evaluation results show that Jyotish successfully constructs a predictive model, which provides considerably high prediction accuracy for people's movement. (ScienceDirect)
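To make the pipeline concrete, here is a minimal sketch of the Naive Bayes idea behind the location predictor. Everything in it is illustrative: the toy trace, the "strongest access point = location" shortcut (a stand-in for the paper's actual clustering step), and the hour-of-day feature are my assumptions, not Jyotish's implementation.

```python
from collections import Counter, defaultdict

# Toy Wifi trace: (person, hour_of_day, strongest_ap). The real system
# clusters co-occurring access points into "locations"; here each strongest
# AP simply stands in for a location.
trace = [
    ("alice", 9,  "ap-library"), ("alice", 10, "ap-library"),
    ("alice", 12, "ap-cafe"),    ("alice", 13, "ap-library"),
    ("alice", 9,  "ap-library"), ("alice", 12, "ap-cafe"),
]

# Count-based Naive Bayes: P(location | hour) ∝ P(hour | location) * P(location)
loc_counts = Counter()
hour_given_loc = defaultdict(Counter)
for _, hour, loc in trace:
    loc_counts[loc] += 1
    hour_given_loc[loc][hour] += 1

def predict_location(hour):
    """Return the most probable location for a given hour of day."""
    total = sum(loc_counts.values())
    def score(loc):
        prior = loc_counts[loc] / total
        # Laplace smoothing so an unseen hour doesn't zero out a location
        likelihood = (hour_given_loc[loc][hour] + 1) / (loc_counts[loc] + 24)
        return prior * likelihood
    return max(loc_counts, key=score)

print(predict_location(12))  # -> ap-cafe (noon observations point to the cafe)
print(predict_location(9))   # -> ap-library
```

The stay-duration and contact predictors would follow the same shape: count how long each visit to a location lasted, or who appeared in the Bluetooth trace at that location, and report the most likely outcome.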

Full paper here.

Via New Scientist.

Ambient storytelling resources

This post contains starting points for researching and developing “ambient” storytelling and interaction systems (i.e., stories or games that take place in the background, rather than traditional attention-focusing media artifacts such as movies or console video games). These trailheads and links are particularly useful for anyone interested in designing activities that engage with the existing flows in player-participants’ lives.

Precedents and origins



Image Mapping and Tracking for AR Apps

I’m interested in the ways that augmented reality can be used to extend storytelling and interaction into the real world. The literary and gameplay potentials presented by this nascent technology seem limitless. That said, we have a fair distance to go before our world starts looking like the one depicted in Denno Coil. One of the biggest stumbling blocks I’ve encountered is the issue of precise positioning. Without knowing the user’s exact location and orientation, an AR system can’t properly overlay/position objects. Most of the AR apps we’ve seen thus far depend on glyphs to accomplish this task; others use carefully pre-positioned wifi routers or Bluetooth nodes to triangulate the user’s location. The problem with these solutions is that — while they make for decent demos — they don’t really scale. If we’re going to tell stories using AR, I suspect that we’ll be looking for solutions that break free of the need for pre-set glyphs, routers or other equipment.
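The router/beacon approach mentioned above boils down to trilateration: given a few nodes at known positions and a measured distance to each, solve for the user's position. A minimal 2D sketch follows; the anchor coordinates and distances are made up for illustration, and real radio-based ranging would be far noisier than this clean geometry suggests.

```python
import math

# Hypothetical fixed nodes (wifi routers or Bluetooth beacons) at known
# 2D positions. Positions are illustrative, not from any real deployment.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]

def trilaterate(anchors, dists):
    """Locate a point from three anchor positions and measured distances.

    Subtracting the first circle equation from the other two linearizes
    the problem into a 2x2 linear system, solved with Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A user actually standing at (3, 4): perfect distance measurements
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))  # -> (3.0, 4.0)
```

The scaling problem is visible right in the setup: every space you want to augment needs its own surveyed set of anchors before any of this works.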

This is where image recognition comes in. Projects like Microsoft’s Photosynth illustrate the capacity of image databases to define 3D space. More recently, AR researchers have started to use image recognition and mapping techniques to create fluid “glyph-free” applications. The team at the University of Graz’s Christian Doppler Laboratory has just posted some exciting new videos of their work in this field.

These videos hint at the kind of seamlessness of interaction we can expect from AR in the near future.

More: Handheld Augmented Reality at the Christian Doppler Laboratory