These are some of my notes from Mikael Fernström’s lecture at AHO.
The aim of the “Soundobject”:http://www.soundobject.org/ research is to liberate interaction design from visual dominance, to free up our eyes, and to do what small displays don’t do well.
Reasons for focusing on sound:
* Sound is currently under-utilised in interaction design
* Vision is overloaded and our auditory senses are seldom engaged
* In the everyday world we are used to hearing a great deal
* Adding sound to existing, optimised visual interfaces does not add much to usability
Sound is very good at attracting our attention, so we have alarms and notification systems that successfully use sound in communication and interaction. We talked about using ‘caller groups’ on mobile phones where people in an address book can be assigned different ringtones, and how effective it was in changing our relationship with our phones. In fact it’s possible to sleep through unimportant calls: our brains are processing and evaluating sound while we sleep.
One fascinating thing that I hadn’t considered is that sound is our fastest sense: it has an extremely high temporal resolution (ten times faster than vision), so for instance our ears can hear pulses at a much higher rate than our eyes can watch a flashing light.
h3. Disadvantages of sound objects
Sound is not good for continuous representation because we cannot shut out sound in the way we can divert our visual attention. It’s also not good for absolute display: pitch, loudness and timbre are perceived relatively by most people, and even people with absolute pitch can be affected by contextual sounds. Context is a big issue: loud or quiet environments, such as libraries and airplanes, affect the way that sound must be used in interfaces.
There are also big problems with spatial representation in sound: techniques that mimic the position of sound based on binaural differences are inaccessible to about a fifth of the population. This perception of space in sound is also intricately linked with the position and movement of the head. “Some Google searches on spatial representation of sound”:http://www.google.com/search?&q=spatial+representation+of+sound. See also “Psychophysical Scaling of Sonification Mappings [pdf]”:http://sonify.psych.gatech.edu/publications/pdfs/2000ICAD-Scaling-WalkerKramerLane.pdf
‘Filling a bottle with water’ is a sound that could work as part of an interface, representing actions such as downloading and uploading, or replacing progress bars. The sound can be abstracted into a ‘cartoonification’ that works more effectively: the abstraction separates simulated sounds from everyday sounds.
Mikael cites inspiration from “foley artists”:http://en.wikipedia.org/wiki/Foley_artist working on film sound design, who are experienced in emphasising and simplifying sound actions, and in creating dynamic sound environments, especially in animation.
A side effect of this ‘cartoonification’ is that sounds can be generated in simpler ways: reducing processing and memory overhead in mobile devices. In fact all of the soundobject experiments rely on parametric sound synthesis using “PureData”:http://www.puredata.org/: generated on the fly rather than using sampled sound files, resulting in small, fast, adaptive interface environments (sound files and the PD files used to generate the sounds can be found at the “Soundobject”:http://www.soundobject.org/ site).
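The actual prototypes are PureData patches, but the idea of a parametric cue is easy to sketch. As a rough illustration only (this is my own code, not from the soundobject work; the function names and the quadratic pitch curve are my assumptions), here is a ‘filling bottle’ progress cue where the resonant pitch rises as a download progresses, computed on the fly rather than played from a sample:

```python
import math

SAMPLE_RATE = 8000  # a low rate keeps the sketch small

def fill_frequency(progress, f_empty=200.0, f_full=1200.0):
    """Resonant pitch of the 'bottle' rises as it fills (progress 0.0..1.0)."""
    return f_empty + (f_full - f_empty) * progress ** 2

def render_fill_cue(duration=1.0):
    """Synthesise a rising tone tracking a fake download's progress."""
    n = int(SAMPLE_RATE * duration)
    samples, phase = [], 0.0
    for i in range(n):
        progress = i / n
        # accumulate phase so the frequency can change without clicks
        phase += 2 * math.pi * fill_frequency(progress) / SAMPLE_RATE
        samples.append(0.5 * math.sin(phase))
    return samples
```

Writing the samples to the audio device or a WAV file is left out; the point is that a couple of parameters (empty and full frequencies, duration) replace a stored sound file entirely.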
One exciting and pragmatic idea that Mikael mentioned was simulating ‘peas in a tin’ to hear how much battery is left in a mobile device. This seems quite possible, reduced to mere software, with the accelerometer in the “Nokia 3220”:http://www.nokia.com/phones/3220. Imagine one ‘pea’ rattling about, instead of one ‘bar’ on a visual display…
h3. Research conclusions
The most advanced prototype of a working sound interface was a box that responded to touch, with invisible soft-buttons on its surface that could only be perceived through sound. The synthesised sounds responded to the movement of fingertips across a large touchpad-like device (I think it was a “tactex”:http://www.tactex.com/ device). These soft-buttons used a simplified sound model that synthesised _impact_, _friction_ and _deformation_. See “Human-Computer Interaction Design based on Interactive Sonification [pdf]”:http://richie.idc.ul.ie/eoin/research/Actions_And_Agents_04.pdf
The testing involved asking users to feel and hear their way around a number of different patterns of soft-buttons, and to draw the objects they found. See “these slides”:http://www.flickr.com/photos/timo/tags/soundobjects/ for some of the results.
The conclusions were that users were almost as good at using the sound interfaces as conventional soft-button interfaces, and that auditory displays are certainly a viable option for ubiquitous, and especially wearable, computing.
h3. More reading
“Gesture Controlled Audio Systems”:http://www.cost287.org/
Notes on two related projects:
h2. 1. Time that land forgot
* A “project”:http://www.elasticspace.com/timeland/ in collaboration with Even Westvang
* Made in 10 days at the Icelandic locative media workshop, summer 2004
* Had the intention of making photo archives and GPS trails more useful/expressive
* Looked at patterns in my photography: 5 months, 8000 photos, visualised by date and time of day. A fantastic resource for me: late night parties, early morning flights, holidays and the effect of the midnight sun are all visible.
* Looking now to make it useful as part of a more pragmatic interface, and to try other approaches less focused on abstracted visualisation
* “info, details, research and source code”:http://www.elasticspace.com/2004/07/timeland
* “time visualisation”:http://www.elasticspace.com/images/photomap_times_large.gif
h2. 2. Marking in urban public space
I’ve also been mapping stickering, stencilling and flyposting: walking around with the camera+GPS and “photographing examples of marking”:http://www.flickr.com/photos/timo/sets/8380/ (not painted graffiti).
This research looks at the marking of public space by investigating the physical annotation of the city: stickering, stencilling, tagging and flyposting. It attempts to find patterns in this marking practice, looking at visibility, techniques, process, location, content and audience. It proposes ways in which this marking could be a layer between the physical city and digital spatial annotation.
h3. Some attributes of sticker design
* *Visibility*: contrast, monochromatic, patterns, bold shapes, repetition
* *Patina*: history, time, decay, degradation, relevance, filtering, social effects
* *Physicality*: residue of physical objects: interesting because these could easily contain digital info
* *Adaptation and layout*: layout is usually respectful, innovative use of DTP and photocopiers, adaptive use of sticker patina to make new messages on top of old
Layers of information build on top of each other: as with graffiti, stickers show their age through fading and patina, while flyposters become unstuck, torn and covered in fresh material. Viewed from a distance the patina is evident; new work tends to respect old, and even commercial flyposting respects existing graffiti work.
Techniques vary from strapping zip-ties through cardboard and around lampposts for large posters, to simple hand-written notes stapled to trees, and short-run printed stickers. One of the most fascinating and interactive techniques is the poster offering strips of tear-off information. These are widely used, even in remote areas.
Initial findings show that stickers don’t relate to local space: they are less about specific locations than about finding popular locations, “cool neighbourhoods”, or just ensuring repeat exposure. This is the opposite of my expectations, and perhaps sheds some light on the current success/failure of spatial annotation projects.
I am particularly interested in the urban environment as an interface to information and an interaction layer for functionality, using our spatial and navigational senses to access local and situated information.
There is a concern that in a densely annotated city we might have an overload of information: how do we filter and foreground relevant, important information? Given that current technologies have very short ranges (10-30mm), we might be able to use our existing spatial skills to navigate overlapping information. We could shift some of the burden of information retrieval from information architecture to physical space.
I finished by showing this animation by Kriss Salmanis, a young Latvian artist: an amazing re-mediation of urban space through stencilling, animation and photography. (“Un ar reizi naks tas bridis” roughly translates as “And in time the moment will come”.)
p(footnote). Graffiti Archaeology, Cassidy Curtis
p(footnote). Street Memes, collaborative project
p(footnote). Spatial annotation projects list
p(footnote). Nokia RFID kit for 5140
p(footnote). Spotcodes, High Energy Magic
p(footnote). ‘Mystery Meat navigation’, Vincent Flanders
p(footnote). RDF as barcodes, Chris Heathcote
p(footnote). Implementation: spatial literature
p(footnote). Yellow Arrow
“Even”:http://www.polarfront.org and I presented our “Timeland”:http://www.elasticspace.com/timeland/ project during the three-day conference and exhibition.
I have made a large “photo set”:http://www.flickr.com/photos/timo/sets/18602/ at Flickr, and we have been using the tag “art+communication”:http://www.flickr.com/photos/tags/artcommunication/ for collaborative documentation.
The highlight of the event was a trip to Limbazi, for the opening of “Piens”:http://locative.x-i.net/piens/info.html the “milk” project, looking at the personal stories around the mapping of milk routes through the EU. It was really good to see GPS being used as a storytelling tool, a way of opening up personal stories in the documentary process.
A big thank you to the RIXC lot, and everyone involved.
* What is a designer: things, places, messages
* Models and Constructs
* Design Research: Methods and Perspectives
* Design Writing Research
The easiest way of linking photos to locations is to combine the time-stamps from both a digital camera and GPS receiver or other location-aware device. If this data is available (over the same period of time) it’s possible to process a series of images and location tracks to stamp each image with location metadata.
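As a minimal sketch of that time-stamp matching (my own code, not any of the tools listed below; the names are illustrative), assuming the tracklog is already sorted by time and both clocks are in the same timezone:

```python
import bisect
from datetime import timedelta

def nearest_fix(track, photo_time, tolerance=timedelta(minutes=5)):
    """track: list of (datetime, lat, lon) tuples sorted by time.
    Returns the (lat, lon) closest in time to photo_time, or None
    if no trackpoint falls within the tolerance."""
    if not track:
        return None
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, photo_time)
    # the best match is one of the two neighbouring trackpoints
    candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
    best = min(candidates, key=lambda j: abs(times[j] - photo_time))
    if abs(times[best] - photo_time) > tolerance:
        return None
    return track[best][1], track[best][2]
```

The tolerance guards against photos taken while the GPS had no fix; a fuller pipeline might interpolate between the two neighbouring trackpoints rather than picking the nearer one.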
Here are a few resources, papers, projects, guidelines and other geo-reference issues.
* “Position-annotated Photographs: The Geotemporal Web”:http://www.dmst.aueb.gr/dds/pubs/jrnl/2003-PC-GTWeb/html/gtweb.html
* “GEOREP: Digital Library for Spatial Data”:http://www.dlib.org/dlib/december96/canada/12proulx.html
* “Geographic location tags on digital images, Microsoft [pdf]”:http://wwmx.org/docs/wwmx_acm2003.pdf
* “Portable digital photo album [time based interface]”:http://www.ece.ubc.ca/~elec418/project/project2handedin/nsiu/prototype.html
h3. Prior work
* “Tokyo Picturesque”:http://www.downgoesthesystem.com/devzone/exiftest/final/ “[Details]”:http://www.downgoesthesystem.com/devzone/exiftest/details/
* “Habitat Perspectives”:http://www.marumushi.com/apps/perspectives/
* “Photo Location”:http://www.986.org/sites/ghogh/CDC/CDC_5505.html “[Details]”:http://www.986.org/sites/ghogh/CDC/CDC_metadata.html
* “Geo Snapper”:http://www.geosnapper.com/index.php
* “WWMX web demo”:http://www.wwmx.org/WebClient.aspx
* “Good list of other photo mapping projects”:http://transmutable.com/PhotoMaps/
h3. Geo-referencing Photos
These are some commercial applications and scripts that link photographs to geographic information.
* “93 Photo Street”:http://transmutable.com/93PhotoStreet/
* “Media Mapper”:http://www.redhensystems.com/products/
* “GPS photo link”:http://www.geospatialexperts.com/
* “GPS TrackMaker”:http://www.gpstm.com/eng/screens_eng.htm
* “WWMX Travelogue application”:http://www.wwmx.org/Download.aspx/
* “AkuAku: GPS tagged jpegs”:http://akuaku.org/archives/2003/05/gps_tagged_jpeg.shtml
* “Adding GPS Information to EXIF Images with Photostudio”:http://www.stuffware.co.uk/articles/00000001.html
* “GPS Photo Linking in iViewMedia Pro [Mac]”:http://www.macdevcenter.com/pub/a/mac/2004/06/15/gps_photo.html
* “GPS plotting in Flash”:http://www.marcosweskamp.com/components/tokcomponents/geoplotter/demo.html
h3. GPS track and waypoint extraction
Transferring data from GPS devices can be problematic. If this is going to work in a wider, collaborative context there is a need for guidelines covering this process. It is also really important to make sure that units and timezones are correctly set in all software, so that no unwanted translation happens as the data is converted. Exported data also tends to be messy, with mixed tracklogs and waypoints, which for us meant a lot of hand-tweaking.
* “Garmin Mapsource”:http://www.garmin.com/cartography/
* “MacGPS Pro”:http://www.macgpspro.com/
* “GPS Babel”:http://gpsbabel.sourceforge.net/
* “GPSylon tool for downloading/viewing GPS data”:http://gpsmap.sourceforge.net/
* “GPS to GEO-RDF”:http://www.hackdiary.com/archives/000040.html
* “Some notes on coordinate translation”:http://life.csu.edu.au/geo/dms.html
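Some of the hand-tweaking needed to separate mixed tracklogs and waypoints can be scripted. A rough Python sketch, assuming GPX 1.0 input (the function name and the tuple format are my own):

```python
import xml.etree.ElementTree as ET

# the GPX 1.0 namespace; exports from other tools may use GPX 1.1
GPX_NS = "{http://www.topografix.com/GPX/1/0}"

def split_gpx(xml_text):
    """Separate a messy GPX export into (waypoints, trackpoints),
    each a list of (lat, lon) tuples."""
    root = ET.fromstring(xml_text)
    waypoints = [(float(w.get("lat")), float(w.get("lon")))
                 for w in root.iter(GPX_NS + "wpt")]
    trackpoints = [(float(p.get("lat")), float(p.get("lon")))
                   for p in root.iter(GPX_NS + "trkpt")]
    return waypoints, trackpoints
```

Files without the namespace declaration, or in GPX 1.1, would need the namespace string adjusted, which is exactly the kind of silent mismatch the guidelines should cover.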
h3. Extracting EXIF data
To get a handle on the photographic data we need to look at the embedded EXIF information, which contains things like capture date, time, exposure and aperture.
* “Python Exif Parser”:http://pyexif.sourceforge.net/
* “Media Metadata for Python”:http://sourceforge.net/projects/mmpython/
* “Extracting EXIF data with Python”:http://simon.incutio.com/archive/2003/11/13/exif
* “Geo tagging images: Exif GPS with python and java”:http://kennethhunt.com/archives/000935.html
* “EXIF metadata extraction in java”:http://www.drewnoakes.com/code/exif/
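Whichever parser is used, one detail always needs handling: EXIF stores its capture date in a non-ISO format and without a timezone. A small sketch of that normalisation step (my own helper, not part of any of the libraries above):

```python
from datetime import datetime, timedelta

def exif_time_to_utc(value, camera_utc_offset=timedelta(0)):
    """EXIF DateTimeOriginal looks like '2004:07:01 12:34:56' (colons
    in the date part, unlike ISO 8601) and carries no timezone.
    Subtracting the camera clock's offset from UTC gives a value that
    can be matched against GPS logs, which are recorded in UTC."""
    local = datetime.strptime(value, "%Y:%m:%d %H:%M:%S")
    return local - camera_utc_offset
```

A camera set to Latvian summer time, for example, would need `camera_utc_offset=timedelta(hours=3)` before its photos line up with a tracklog.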
h3. Content metadata guidelines
In order to standardise the sharing of geographic information (tracklogs and waypoints) we need to think carefully about the formats used. We initially intended to use locative packets, but have ended up using GPX format alongside some custom XML for time and photo information.
* “Locative packets”:http://locative.net/workshop/index.cgi?Locative_Packets
* “Other recommended vocabularies”:http://locative.net/workshop/index.cgi?Recommended_Vocabularies
* “GPX namespace manual”:http://www.topografix.com/gpx_manual.asp
* “JPEG RDF strategy for storing location info”:http://nwalsh.com/java/jpegrdf/jpegrdf.html
* “W3 RDF geo-vocabulary”:http://www.w3.org/2003/01/geo/
* “Describing and retrieving photos using RDF and HTTP”:http://www.w3.org/TR/photo-rdf/
* “Exif vocabulary workspace – RDF Schema”:http://www.w3.org/2003/12/exif/
* “Vocabularies for w3photo project”:http://esw.w3.org/topic/W3PhotoVocabs
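To give a sense of how little is involved in emitting the track side of this, here is a rough sketch of writing a minimal GPX 1.0 document in Python (my own helper; a real file would also carry metadata and the custom time/photo XML mentioned above):

```python
import xml.etree.ElementTree as ET

def track_to_gpx(points):
    """points: iterable of (lat, lon, iso_time) tuples.
    Returns a minimal GPX 1.0 document as a string."""
    gpx = ET.Element("gpx", version="1.0",
                     xmlns="http://www.topografix.com/GPX/1/0")
    trk = ET.SubElement(gpx, "trk")
    seg = ET.SubElement(trk, "trkseg")
    for lat, lon, iso_time in points:
        pt = ET.SubElement(seg, "trkpt", lat=str(lat), lon=str(lon))
        ET.SubElement(pt, "time").text = iso_time
    return ET.tostring(gpx, encoding="unicode")
```

Round-tripping through a format this simple is part of why GPX won out over the locative-packets approach for us.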
Update: “new website”:http://www.futurefarmers.com/survey/outskirts.html