bq. AR in unknown scenes is always going to be difficult without a remote expert to annotate the map. Here, we restrict ourselves to finding a dominant plane in the scene, and then running simple VR/AR games on this plane: essentially, you can have little AR critters running around on your tabletop. At present, no attempt is made to exploit the map to e.g. find occluding geometry; this is an area of future work. (From Georg Klein).
I love how it goes in and out of register, and how it ‘picks up’ the registration from an initial set of objects. People will end up intuiting that AR works only in certain ways: “not around trees”, for instance, or only in “static scenes”.
Notes on two related projects:
h2. 1. Time that land forgot
* A “project”:http://www.elasticspace.com/timeland/ in collaboration with Even Westvang
* Made in 10 days at the Icelandic locative media workshop, summer 2004
* Had the intention of making photo archives and gps trails more useful/expressive
* Looked at patterns in my photography: 5 months, 8000 photos, visualised them by date / time of day. Fantastic resource for me: late night parties, early morning flights, holidays and the effect of midnight sun is visible.
* Looking now to make it useful as part of more pragmatic interface, to try other approaches less about the abstracted visualisation
* “info, details, research and source code”:http://www.elasticspace.com/2004/07/timeland
* “time visualisation”:http://www.elasticspace.com/images/photomap_times_large.gif
h2. 2. Marking in urban public space
I’ve also been mapping stickering, stencilling and flyposting: walking around with the camera+gps and “photographing examples of marking”:http://www.flickr.com/photos/timo/sets/8380/ (not painted graffiti).
This research looks at the marking of public space by investigating the physical annotation of the city: stickering, stencilling, tagging and flyposting. It attempts to find patterns in this marking practice, looking at visibility, techniques, process, location, content and audience. It proposes ways in which this marking could be a layer between the physical city and digital spatial annotation.
h3. Some attributes of sticker design
* *Visibility*: contrast, monochromatic, patterns, bold shapes, repetition
* *Patina*: history, time, decay, degradation, relevance, filtering, social effects
* *Physicality*: residue of physical objects: interesting because these could easily contain digital info
* *Adaptation and layout*: layout is usually respectful, innovative use of dtp and photocopiers, adaptive use of sticker patina to make new messages on top of old
Layers of information build on top of each other: as with graffiti, stickers show their age through fading and patina, and flyposters become unstuck, torn and covered in fresh material. Viewed from a distance the patina is evident; new work tends to respect old, and even commercial flyposting respects existing graffiti work.
Techniques vary from strapping zip-ties through cardboard and around lampposts for large posters, to simple hand-written notes stapled to trees, and short-run printed stickers. One of the most fascinating and interactive techniques is the poster offering strips of tear-off information. These are widely used, even in remote areas.
Initial findings show that stickers don’t relate to local space: they are less about specific locations than about finding popular locations, “cool neighbourhoods”, or simply ensuring repeat exposure. This is the opposite of my expectations, and perhaps sheds some light on the current success and failure of spatial annotation projects.
I am particularly interested in the urban environment as an interface to information and an interaction layer for functionality, using our spatial and navigational senses to access local and situated information.
There is a concern that a densely spatially annotated city might give us an overload of information: how do we filter and foreground relevant, important information? Given that current technologies have very short ranges (10–30mm), we might be able to use our existing spatial skills to navigate overlapping information. We could shift some of the burden of information retrieval from information architecture to physical space.
I finished by showing this animation by Kriss Salmanis, a young Latvian artist: an amazing re-mediation of urban space through stencilling, animation and photography. (“Un ar reizi naks tas bridis” roughly translates as “And in time the moment will come”.)
p(footnote). Graffiti Archaeology, Cassidy Curtis
p(footnote). Street Memes, collaborative project
p(footnote). Spatial annotation projects list
p(footnote). Nokia RFID kit for 5140
p(footnote). Spotcodes, High Energy Magic
p(footnote). “Mystery Meat navigation”, Vincent Flanders
p(footnote). RDF as barcodes, Chris Heathcote
p(footnote). Implementation: spatial literature
p(footnote). Yellow Arrow
“Even”:http://www.polarfront.org and I presented our “Timeland”:http://www.elasticspace.com/timeland/ project during the 3 day conference and exhibition.
I have made a large “photo set”:http://www.flickr.com/photos/timo/sets/18602/ at Flickr, and we have been using the tag “art+communication”:http://www.flickr.com/photos/tags/artcommunication/ for collaborative documentation.
The highlight of the event was a trip to Limbazi, for the opening of “Piens”:http://locative.x-i.net/piens/info.html the “milk” project, looking at the personal stories around the mapping of milk routes through the EU. It was really good to see GPS being used as a storytelling tool, a way of opening up personal stories in the documentary process.
A big thank you to the RIXC lot, and everyone involved.
There’s a really good “writeup of the installations and artwork at Grandtextauto”:http://grandtextauto.gatech.edu/2004/09/05/isea-2004-art-report.
There are two versions: a “low-bandwidth”:/timeland/noimages.html no-image version and a “high-bandwidth”:/timeland/ version with images. There is also a “Quicktime movie”:http://polarfront.org/time_land_forgot.mov for people that can’t run Flash at a reasonable frame rate.
We have made the “source code”:http://www.polarfront.org/timeland.zip (.zip file) available for people that want to play with it, under the GNU General Public License (GPL).
h2. Background: Narrative images and GPS tracks
Over the last five years Timo has been photographing daily experience using a digital camera and archiving thousands of images by date and time. Transient, ephemeral and numerous; these images have become a sequential narrative beyond the photographic frame. They sit somewhere between photography and film, with less emphasis on the single image in re-presenting experience.
For the duration of the workshop Timo used a GPS receiver to record tracklogs, capturing geographic co-ordinates for every part of the journey. It is this data that we explore here, using it to provide a history and context to the images.
This project is particularly relevant as mobile phones start to integrate location-aware technology and as cameraphone image-making becomes ubiquitous.
We discussed the context in which we were creating an application: who would use it, and what would they be using it for? In our case, Timo is using the photographs as a personal diary, and this is the first scenario: a personal life-log, where visualisations help to recollect events, time-periods and patterns.
Then there is the close network of friends and family, or participants in the same journey, who are likely to invest time looking at the system and finding their own perspective within it. Beyond that there is a wider audience interested in images and information about places, that might want a richer understanding of places they have never been, or places that they have experienced from a different perspective.
Images are immediately useful and communicative for all sorts of audiences; it was less clear how we should use the geographic information. The GPS tracks might only be interesting to people who actually participated in that particular journey or event.
We looked at existing photo-mapping work, discovering a lot of projects that attempted to give images context by placing them within a map. But these visualisations and interfaces seemed to foreground the map over the images, and photos embedded in maps get lost through layering. The problem was most dramatic with topographic or street maps full of superfluous detail, detracting from the immediate experience of the image.
Even the exhaustive and useful research from Microsoft’s “World Wide Media Index (WWMX)”:http://wwmx.org/ arrives at a somewhat unsatisfactory visual interface. The paper details five interesting mapping alternatives, and settles on a solution that averages the number of photos in any particular area, giving it a representatively scaled ‘blob’ on a street map (see below). Although this might solve some problems with massive data-sets, it seems a rather clunky interface solution, overlooking something that is potentially beautiful and communicative in itself.
p(caption). See “http://wwmx.org/docs/wwmx_acm2003.pdf”:http://wwmx.org/docs/wwmx_acm2003.pdf page 8
Other examples (below) show alternative mapping solutions: Geophotoblog pins images to locations but staggers them in time to avoid layering; an architectural map of Pariser Platz, Berlin gives an indication of direction; and an aerial photo provides context for user-submitted photos at Tokyo-picturesque. There are more examples of prior work, papers and technologies “here”:http://www.elasticspace.com/index.php?id=44.
p(caption). Image from “Pariser Platz Berlin”:http://www.fes.uwaterloo.ca/u/tseebohm/berlin/map-whole.html
p(caption). Image from “geophotoblog”:http://brainoff.com/geophotoblog/plot/
p(caption). Image from “Tokyo Picturesque”:http://www.downgoesthesystem.com/devzone/exiftest/final/
By shifting the emphasis to location, these representations most clearly lack _time_, and thereby the context in which the images can most easily form a narrative for the viewer. The images are subordinate to the map, which removes the instant expressivity of the image.
We feel that these orderings make spatially annotated images a weaker proposition than simple sequential images in terms of telling the story of the photographer. This is very much a problem of the seemingly objective space as contained by the GPS coordinates versus the subjective place of actual experience.
h2. Using GPS Data
We started our technical research by looking at the data available to us, discovering attributes implicit in the GPS tracks that could provide useful context, many of which are seldom exposed:
* speed in 3 dimensions
* time of day
* time of year
With a little processing, and a little extra data we can find:
* acceleration in 3 dimensions
* change in heading
* mode of transportation (roughly)
* nearest landmark or town
* actual (recorded) temperature and weather
* many other possibilities based on local, syndicated data
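Given timestamped track points, the derived quantities above follow from simple differencing of consecutive fixes. A minimal sketch, assuming a flat list of (time, lat, lon) tuples rather than the project's actual data structures:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds(points):
    """points: list of (t_seconds, lat, lon) fixes. Returns m/s per segment."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt > 0:
            out.append(haversine_m(la0, lo0, la1, lo1) / dt)
    return out

def accelerations(points):
    """Change in speed between consecutive segments, in m/s^2."""
    v = speeds(points)
    dts = [p1[0] - p0[0] for p0, p1 in zip(points, points[1:])]
    return [(v1 - v0) / dt for v0, v1, dt in zip(v, v[1:], dts[1:]) if dt > 0]
```

Heading change falls out the same way, by differencing bearings between fixes; nearest town or weather requires joining against external data keyed on position and time.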
Would it be interesting to use acceleration as a way of looking at photos? We would be able to select arrivals and departures by choosing images that were taken at moments of greatest acceleration or deceleration. Would these images be the equivalent of ‘establishing’, ‘resolution’ or ‘transition’ shots in film, generating a good narrative frame for a story?
Would looking at photos by a specific time of day give a good indication of the patterns and habits of daily life? The superimposition of the daily unfolding trails of a habitual office-dwelling creature might show interesting departures from rote behaviour.
h2. Using photo data
By analysing and visualising image metadata we wanted to look for ways of increasing the expressive qualities of an image library. Almost all digital images are saved with the date and time of capture, but we also found unexplored tags in the EXIF data that accompanies digital images:
* focus distance
* focal length
* white balance
We analysed metadata from almost 7000 photographs taken between 18 February and 26 July 2004 to look for patterns that we might be able to exploit in new interfaces. We looked specifically for patterns that help identify changes over the course of the day.
p(caption). Shutter, Aperture, Focal length and File size against time of day (click for larger version)
This shows an increase in shutter speed and aperture during the middle of the day. The images also become sharper during daylight hours, indicated by increased file size (sharper images hold more detail and compress less).
p(caption). Date against time of day (click for larger version)
This shows definite patterns: holidays and travels are clearly visible (three horizontal clusters towards the top), as are late night parties and early morning flights. This gives us huge potential for navigation and interface. Image-based ‘life-log’ applications like “Flickr”:http://www.flickr.com and “Lifeblog”:http://www.nokia.com/lifeblog are appearing; the visualisation of this lightweight metadata will be invaluable for re-presenting and navigating large photographic archives like these.
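Extracting the (date, time-of-day) pairs behind a plot like this is straightforward once the capture timestamp is parsed. A small sketch, assuming the common EXIF timestamp convention (the function name is ours, for illustration):

```python
from datetime import datetime

def date_and_hour(exif_stamp):
    """Split a raw EXIF timestamp ('2004:07:18 02:14:09') into a
    (date, fractional hour-of-day) pair, ready to plot as one point
    on a date-versus-time-of-day scatter."""
    dt = datetime.strptime(exif_stamp, "%Y:%m:%d %H:%M:%S")
    return dt.date(), dt.hour + dt.minute / 60 + dt.second / 3600
```

Plotting one such point per photograph gives exactly the view described: horizontal bands for trips, and scattered late-night or early-morning outliers.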
Matias Arje – also at the Iceland workshop – has done “valuable work”:http://arje.net/locative/ in this direction.
Getting at the GPS and EXIF data was fairly trivial, though it did demand some testing and swearing.
We are both based on Apple OS X systems, and we had to borrow a PC to get the tracklogs reliably out of Timo’s GPS and into Garmin’s MapSource. We decided to use GPX as our format for the GPS tracks; GPSBabel happily created this data from the original Garmin files.
The EXIF was parsed out of the images by a few lines of Python using the EXIF.py module and turned into another XML file containing image file name and timestamp.
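As a rough illustration of that second step, here is how such an XML file of filenames and timestamps might be assembled with the standard library once the tags have been read. The element and attribute names are invented for illustration; the project's actual schema may differ:

```python
from xml.etree import ElementTree as ET

def photos_to_xml(photos):
    """photos: list of (filename, exif_timestamp) pairs, where the
    timestamp is the raw EXIF 'DateTimeOriginal' string, e.g.
    '2004:07:18 14:32:09'. Returns an XML document of one <image>
    element per photo (hypothetical element names)."""
    root = ET.Element("photos")
    for name, stamp in photos:
        img = ET.SubElement(root, "image")
        img.set("file", name)
        img.set("taken", stamp)
    return ET.tostring(root, encoding="unicode")
```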
We chose Flash as the container for the front end: it is ubiquitous, and it is Even’s programming poison of choice for visualisation. Flash reads both the GPX and EXIF XML files and generates the display in real time.
More on our choices of technologies “here”:http://www.elasticspace.com/2004/07/geo-referenced-photography.
h2. First prototype
Mirroring Timo’s photography and documentation effort, Even has invested serious time and thought in “dynamic continuous interfaces”:http://www.polarfront.org. The first prototype is a linear experience of a journey, suitable for a gallery or screening, where images are overlaid into textural clusters of experience. It shows a scaling representation of the travel route based on the distance covered over the last 20–30 minutes. Images recede in scale and importance as they move back in time. Each tick represents one minute; every red tick represents an hour.
We chose to balance the representation in the interface around a set of priorities: first image (for expressivity), then time (for narrative), then location (for spatialising, and commenting on, image and time).
In making these interfaces there is the problem of scale. The GPS data itself has a resolution down to a few meters, but the range of speeds a person can travel at varies wildly through different modes of transportation. The interface therefore had to take into account the temporo-spatial scope of the data and scale the resolution of display accordingly.
This was solved by creating a ‘camera’ connected to a spring system that attempts to center the image on the advancing ‘now’ while keeping a recent history of 20 points in view. The parser for the GPS tracks discards the positional data between the minutes, and the animation is driven forward by each new ‘minute’ found in the track and inserted into the camera’s view. This animation system can be used to generate both animations and interactive views of the data set.
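A one-dimensional sketch of such a spring camera, with illustrative stiffness and friction constants rather than the values actually tuned in the prototype:

```python
def step_camera(pos, vel, target, stiffness=0.1, friction=0.85):
    """One animation tick of a damped spring camera.
    pos, vel: current camera position and velocity.
    target: the advancing 'now' the camera tries to center on.
    stiffness and friction are illustrative tuning constants."""
    vel = (vel + (target - pos) * stiffness) * friction
    return pos + vel, vel

def settle(target, ticks=200):
    """Run the camera from rest at 0 until it settles near the target."""
    pos, vel = 0.0, 0.0
    for _ in range(ticks):
        pos, vel = step_camera(pos, vel, target)
    return pos
```

The friction term is what gives the camera its eased, slightly lagging feel: too little and it overshoots each new minute, too much and it never catches up to ‘now’.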
There are some issues with this strategy. There will be discontinuities in the tracklogs where the GPS is switched off during standstills and at night. Currently the system smooths tracklog time so that breaks appear as quick transitions.
The system should ideally maintain a ‘subjective feeling’ of time adjusted to picture taking and movement; a temporal scaling as well as a spatial scaling. This would be an analog to our own remembering of events: minute memories from double loop roller-coasters, smudged holes of memory from sleepy nights.
Most of the tweaking in the animation system went into refining the extents system around the camera history and zoom, the acceleration and friction of the spring systems, and the ratio between the insertion of new points and animation ticks.
In terms of processing speed this interface should ideally have been built in Java or as a stand-alone application, though tests have shown that Flash is able to parse a 6000-point tracklog and draw it on screen along with 400 medium-resolution images. Once the images and points have been drawn on the canvas they animate with reasonable speed on mid-spec hardware.
This prototype has proved that many technical challenges are solvable, and given us a working space to develop more visualisations, and interactive environments, using this as a tool for thinking about wider design issues in geo-referenced photography. We are really excited by the sense of ‘groundedness’ the visualisation gives over the images, and the way in which spatial relationships develop between images.
For Timo it has given a new sense of spatiality to image making: the images are no longer locked into a simple sequential narrative, but affected by spatial differences like location and speed. He is now experimenting with more ambient recording (taking a photo exactly every 20 minutes, for example) in an effort to affect the presentation.
Another strand of ideas we explored used the metaphor of a 16mm “Steenbeck”:http://www.harvardfilmarchive.org/gallery/images/conservation_steenbeck.jpg edit deck: scrubbing 16mm film through the playhead and watching the resulting sound and image come together. We could use the scrubbing of an image timeline to control all of the other metadata, giving real control to the user. It would be exciting to explore a spatial timeline of images, correlated with contextual data like the GPS tracks.
We need to overcome the difficulty of obtaining quality data, especially if we expect this to work in an urban environment. GPS is not passive, and “requires a lot of attention to record tracks”:http://www.elasticspace.com/index.php?id=4. Overall our representation doesn’t require location accuracy, just consistency and ubiquity of data; we hope that something like cell-based tracking on a mobile phone becomes more ubiquitous and usable.
We would like to experiment further with the extracted image metadata. For large-scale overviews, images could be replaced by a simple rectangular proxy, coloured by the average hue of the original picture and taking brightness (EV) from exposure and aperture readings. This would show the actual brightness recorded by the camera’s light meter, instead of the brightness of the image.
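That metered brightness can be recovered with the standard exposure-value formula, EV = log2(N²/t) at ISO 100, where N is the f-number and t the shutter time in seconds. A small sketch:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """Exposure value from EXIF aperture and shutter readings.
    EV = log2(N^2 / t) at ISO 100, offset by log2(ISO / 100).
    Higher EV means the camera metered a brighter scene."""
    ev100 = math.log2(f_number ** 2 / shutter_s)
    return ev100 + math.log2(iso / 100)
```

A sunny-16 exposure (f/16 at 1/100 s) comes out around EV 14.6, while a dim interior at f/2 and 1/15 s sits near EV 6, so the proxy rectangles would separate bright days from grey mornings even before looking at the pixels.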
Imagine a series of images from bright green vacation days, dark grey winter mornings or blue Icelandic glaciers, combined with the clusters and patterns that time-based visualisation offers.
We would like to extend the data sets to include other people: from teenagers using GPS camera phones in Japan to photojournalists. How would the visualisations differ, and are there variables that we can pre-set for different uses? And how would the map look with multiple trails to follow, as a collaboration between multiple people and multiple perspectives?
At a technical level it would be good to have more integration with developing standards: we would like to use “Locative packets”:http://locative.net/workshop/index.cgi?Locative_Packets, but need more time and reference material. This would make the system useful as a visualisation tool for other projects, “Aware”:http://aware.uiah.fi/ for example.
We hope that the system will be used to present work from other workshops, and that an interactive installation of the piece can be set up at “Art+Communication”:http://rixc.lv/04/.
Even Westvang works between interaction design, research and artistic practice. Recent work includes a slowly growing digital organism that roams the LAN of a Norwegian secondary school and an interactive installation for the University of Oslo looking at immersion, interaction and narrative. Even lives and works in Oslo. His musings live on “polarfront.org”:http://www.polarfront.org and some of his work can be seen at “bengler.no”:http://www.bengler.no.
Timo Arnall is an interaction designer and researcher working in London, Oslo and Helsinki. Recent design projects include a social networking application, an MMS based interactive television show and a large media archiving project. Current research directions explore mapping, photography and marking in public places. Work and research can be seen at “elasticspace.com”:http://www.elasticspace.com.
Exploring the space of narrative, images and personal geography. For three months I recorded every walk, drive, train journey and flight I took, while photographing spaces and places from daily life.
The project is the first step towards a visual language for spatially located imagery, looking at ways in which personal travelogues can become useful as communication and artefacts of personal memory.
Nine boards, four images each, sit above maps that provide spatial context. Each image is captioned with location information and a key linking it to a point on the map below. The images show spatial transition from one country to another, and a change of season.
The maps are GPS tracks, visualised as simple lines. The scale of each map is determined by the extents of the image locations. This effectively shows a transition from London to Oslo over the period of a few months. The maps give an interesting sense of transition; scale and movement are emphasised.
p(caption). All maps in sequence (click for full size image)
p(caption). All images in sequence
p(caption). Images (detail)
p(caption). Maps (detail)
h3. About the exhibition
AFAR is an exhibition where 25 international artists have been asked to produce work in accordance with the word ‘afar’. The initial intention was to establish a connection between the diverse artistic and creative forms that the invited artists come from: architecture, dance, street art, design, audio, photography, VJ’ing, video art, fashion design, painting and creative writing.
The exhibition was at Råhuset, Copenhagen, Denmark, from 8–23 July 2004.
h3. Some images