A phone to save us from our screens?
Microsoft has two new ads anticipating its upcoming Windows Phone 7 launch. The first is an almost post-apocalyptic vision of humanity stuck with their heads in their mobile devices:
Here’s David Webster, chief strategy officer in Microsoft’s central marketing group, explaining their anti-screen strategy:
“Our sentiment was that if we could have an insight to drive the campaign that flipped the category on its head, then all the dollars that other people are spending glorifying becoming lost in your screen or melding with your phone are actually making our point for us.”
The problem of glowing rectangles is a subject close to my heart, and Matt Jones has been bothered by the increase in mobile glowing attention-wells.
I think Microsoft & Crispin Porter + Bogusky’s advertising strategy stands out in a world full of slick floaty media. The only problem is that without any strategy towards tangible interaction, I’m not sure the ’tiles’ interaction concept is strong enough to actually take people’s attention out of the glass.
Posted in Film, Interaction design, Mobility, Ubicomp
Touch
Posted in Mobility, Project, Research, Technology, Ubicomp
Nokia 3220 with NFC
h3. First impressions
Overall the interaction between phone and RFID tags has been good. The reader/writer is on the base of the phone, at the bottom. This seems a little awkward to use at first, but slowly becomes natural. When I have given it to others, their immediate reaction is to point the top of the phone at the tag, and nothing happens. A few moments of explaining the intricacies of RFID follow, along with looking at the phone for its Nokia ‘fingerprint’ icon. As phones increasingly become replacements for ‘contactless cards’, it seems likely that this interaction will become more habitual and natural.
Once the ‘service discovery’ application is running, the read time from tags is really quick. The sharp vibrations and flashing lights add to a solid feeling of interacting with something, both in the haptic and visual senses. This should turn out to be a great platform for embodied interaction with information and function.
The ability to read and write to tags makes it adaptable as a platform well beyond advertising or ticketing. As an interaction designer I feel quite enabled by this technology: the three basic functions (making phone calls, going to URLs, or sending SMSs) are enough to start thinking about tangible interactions without having to go and program any Java midlets or server-side applications.
I’m really happy that Nokia is putting this technology into a ‘low-end’ phone rather than pushing it out in a ‘smartphone’ range. This is where there is potential for wider usage and mass-market applications, especially around gaming and content discovery.
h3. Improvements
I had some problems launching the ‘service discovery’ application. Sometimes it works, sometimes it doesn’t, and it’s difficult to tell why. It would be great to be able to place the phone on the table, knowing that it will respond to a tag, but it was just a little too unreliable to do that without checking to see that it had responded. The version I have still says it’s a prototype, so this may well be sorted out in the released version.
Overall there is a lack of integration between the service discovery application and the rest of the system: Contacts, the SMS archive, service bookmarks etc. At the moment we need to enter the application to write and manage tags, or to give a ‘shortcut’ to another phone, but it seems that, as with Bluetooth and IR, this should be part of the contextual menus that appear under ‘Options’ within each area of the phone. There are also some infuriating prompts that appear when interacting with URLs; more details below.
h3. Details
p(caption). The phone opens the ‘service discovery’ application whenever it detects a compatible RFID tag near the base of the phone (when the keypad lock is off). This part is a bit obscure: sometimes it doesn’t ‘wake up’ for a tag, and the application needs to be loaded before it will read properly. Once the application is open (about 2-3 seconds) the read time of the tags seems instantaneous.
p(caption). The shortcuts menu lists the stored shortcuts. Confusingly, this is different from ‘bookmarks’ and the ‘names’ list on the phone, although names can be searched from within the application. I think tighter integration with the OS is called for.
p(caption). Shortcuts can be added, edited, deleted, etc. in the same way as contacts. They can be ‘Given’ to another phone or ‘Written’ to a tag.
p(caption). There are three kinds of shortcuts: Call, URL or SMS. ‘Call’ will create a call to a pre-defined number, ‘URL’ will load a pre-defined URL, and ‘SMS’ will send a pre-defined SMS to a particular number. This part of the application has the most room for innovative extensions: we should be able to set the state of the phone, change profiles, change themes, download graphics, etc. This can be achieved by loading URLs, but URLs and mobiles don’t mix, so why should we be presented with them when there could be a more usable layer in between? There could also be preferences for prompts: at the moment each action has to be confirmed with a yes or a no, but in some secure environments it would be nice to have a function launched without the extra button push.
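The on-tag data format for this phone isn’t documented here, but as a rough illustration of how a URL shortcut can be packed onto a tag, here is a minimal Python sketch of an NFC Forum NDEF URI record (an assumption for illustration, not necessarily the encoding the 3220 writes):

bc. # Minimal sketch: build a single short NDEF record carrying a URI.
URI_PREFIXES = {"http://www.": 0x01, "https://www.": 0x02, "http://": 0x03, "https://": 0x04}
def ndef_uri_record(url):
    prefix_code, rest = 0x00, url
    for prefix, code in URI_PREFIXES.items():
        if url.startswith(prefix):
            prefix_code, rest = code, url[len(prefix):]
            break
    payload = bytes([prefix_code]) + rest.encode("utf-8")
    # Flags 0xD1: message begin + end, short record, well-known type; type is 'U' (URI).
    return bytes([0xD1, 0x01, len(payload)]) + b"U" + payload
print(ndef_uri_record("http://www.elasticspace.com/").hex())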
p(caption). If a tag contains no data, then we are notified and placed back on the main screen (as happened when I tried to write to my Oyster card).
p(caption). If the tag is writeable we are asked which shortcut to write to the tag.
p(caption). When we touch a tag with a shortcut on it, a prompt appears asking for confirmation. This is a level of UI to prevent mistakes, and a certain level of security, but it also reduces the overall usability of the system. With URL launching, there are two stages of confirmation, which is infuriating. There needs to be some other mode of confirmation, and the ‘service discovery’ app needs to somehow be deeper in the system to avoid these double button presses.
p(caption). Lastly, there is a log of actions. Useful to see if the application has been reading something in your bag or wallet, without you knowing…
Posted in Interaction design, Mobility, Research, Ubicomp, Usability
Graphic language for touch
This work explores the visual link between information and physical things, specifically around the emerging use of the mobile phone to interact with RFID or NFC. It was a presentation and poster at Design Engaged, Berlin on the 11th November 2005.
Download the icons (PDF, 721KB, Gif preview).
As mobile phones are increasingly able to read and write to RFID tags embedded in the physical world, I am wondering how we will appropriate this for personal and social uses.
I’m interested in the visual link between information and physical things. How do we represent an object that has digital function, information or history beyond its physical form? What are the visual clues for this interaction? We shouldn’t rely on a kind of mystery meat navigation (the scourge of the web-design world) where we have to touch everything to find out its meaning.
This work doesn’t attempt to be a definitive system for marking physical things; it is an exploratory process to find out how digital/physical interactions might work. It uncovers interesting directions while the technology is still largely out of the hands of everyday users.
h3. Reference to existing work
p(caption). Click for larger version.
The inspiration for this is in the marking of public space and existing iconography for interactions with objects: push buttons on pedestrian crossings, contactless cards, signage and instructional diagrams.
This draws heavily on the substantial body of images of visual marking in public space. One of the key findings of this research was that visibility and placement of stickers in public space is an essential part of their use. Current research in ubicomp and ‘locative media’ is not addressing these visibility issues.
There is also a growing collection of existing iconography in contactless payment systems, with a number of interesting graphic treatments in a technology-led, vernacular form. In Japan there are also instances of touch-based interactions being represented by characters, colours and iconography that are abstracted from the action itself.
I have also had great discussions with Ulla-Maaria Mutanen and Jyri Engeström who have been doing interesting work with thinglinks and the intricate weaving of RFID into craft products.
h3. Development
Sketching and development revealed five initial directions: circles, wireless, card-based, mobile-based and arrows (see the poster for more details). The icons range from being generic (abstracted circles or arrows to indicate function) to specific (mobile phones or cards touching tags).
Arrows might be suitable for specific functions or actions in combination with other illustrative material. Icons with mobile phones or cards might be helpful in situations where basic usability for a wide range of users is required. Although the ‘wireless’ icons are often found on current card readers, they do not successfully indicate the touch-based interactions inherent in the technology, and may be confused with WiFi or Bluetooth. The circular icons work at the highest level, and might be most suitable for generic labelling.
For further investigation I have selected a simple circle, surrounded by an ‘aura’ described by a dashed line. I think this successfully communicates the near field nature of the technology, while describing that the physical object contains something beyond its physical form.
In most current NFC implementations, such as the 3220 from Nokia and many iMode phones, the RFID reader is in the bottom of the phone. This means that the area of ‘activation’ is obscured in many cases by the phone and hand. The circular iconography allows for a space to be marked as ‘active’ by the size of the circle, and we might see it used to mark areas rather than points. Usability may improve when these icons are around the same size as the phone, rather than being a specific point to touch.
h3. Work in progress
These are early days for this technology, and this is work in progress. There is more to be done in looking at specific applications, finding suitable uses and extending the language to cover other functions and content.
Until now I have been concerned with generic iconography for a digitally augmented object. But this should develop into a richer language, as the applications for this type of interaction become more specific, and related to the types of objects and information being used. For example it would be interesting to find a graphic treatment that could be applied to a Pokemon sticker offering power-ups as well as a bus stop offering timetable downloads.
I’m also interested in the physical placement of these icons. How large or visible should they be? Are there places that should not be ‘active’? And how will this fit with the natural centres of gravity of the mobile phone in public and private space?
I’ll expand on these things in a few upcoming projects that explore touch-based interactions in personal spaces.
Feel free to use and modify the icons; I would be very interested to see how they can be applied and extended.
h3. Visual references
Oyster Card, Transport for London.
eNFC, Inside Contactless.
Paypass, Mastercard.
ExpressPay, American Express.
FeliCa, Sony.
MiFare, various vendors.
Suica, JR, East Japan Railway Company.
RFID Field Force Solutions, Nokia.
NFC shell for 3220, Nokia.
ERG Transit Systems payment, Dubai.
Various generic contactless vendors.
Contactless payment symbol, Mastercard.
Open Here, Paul Mijksenaar, Piet Westendorp, Thames and Hudson, 1999.
Understanding Comics, Scott McCloud, Harper, 1994.
Embodied interaction in music
I too have “ditched”:http://interconnected.org/home/2005/04/12/my_40gb_ipod_has my large iPod for the “iPod Shuffle”:http://www.apple.com/ipodshuffle/, finding that “I love the white-knuckle ride of random listening”:http://www.cityofsound.com/blog/2005/01/the_rise_and_ri.html. But that doesn’t exclude the need for a better small-screen-based music experience.
The pseudo-analogue interface of the iPod clickwheel doesn’t cut it. It can be difficult to control when accessing huge alphabetically ordered lists, and the acceleration or inertia of the view can be really frustrating. The combination of interactions (clicking into deeper lists, scrolling, clicking deeper again) turns into a long and tortuous experience if you are engaged in any simultaneous activity. Plus it’s difficult to use through clothing, or with gloves.
h3. Music and language
!/images/embodied_music_search.jpg!
My first thought was something “Jack”:http://www.jackschulze.co.uk and I discussed a long time ago: using a phone keypad to type the first few letters of an artist, album or genre and seeing the results in real-time, much like “iTunes”:http://www.apple.com/itunes/jukebox.html does on a desktop. I find myself using this a lot in iTunes rather than browsing lists.
“Predictive text input”:http://www.t9.com/ would be very effective here, when limited to the dictionary of your own music library. (I wonder if “QIX search”:http://www.christianlindholm.com/christianlindholm/2005/02/qix_from_zi_cor.html would do this for a music library on a mobile?)
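As a rough sketch of how keypad search limited to your own library could behave (the library contents here are just placeholders), each digit narrows the list to artists whose names map to that digit sequence:

bc. # Sketch: T9-style filtering of a music library by phone keypad digits.
KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl", "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYS.items() for ch in letters}
def to_digits(text):
    # Map a name to its keypad digit sequence, ignoring spaces and punctuation.
    return "".join(LETTER_TO_DIGIT.get(ch, "") for ch in text.lower())
def search(library, typed_digits):
    # Return artists whose names start with the typed digit sequence.
    return [name for name in library if to_digits(name).startswith(typed_digits)]
library = ["Aphex Twin", "Autechre", "Boards of Canada", "Biosphere"]
print(search(library, "274"))   # 2-7-4 ('a', 'p', 'h') matches only "Aphex Twin"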
Maybe now is the time to look at this as we see “mobile”:http://www.sonyericsson.com/spg.jsp?cc=gb&lc=en&ver=4000&template=pp1_loader&php=php1_10245&zone=pp&lm=pp1&pid=10245 “phone”:http://www.nokia.com/n91/ “music convergence”:http://www.engadget.com/entry/1234000540040867/.
h3. Navigating through movement
!/images/embodied_music_squeeze.jpg!
Since scrolling is inevitable to some degree, even within fine search results, what about using simple movement or tilt to control the search results? One of the problems with using movement for input is context: when is movement intended? And when is movement the result of walking or a bump in the road?
!/images/embodied_music_squeeze2.jpg!
One solution could be a “squeeze and shake” quasi-mode: squeezing the device puts it into a receptive state.
!/images/embodied_music_tilt.jpg!
Another could be more reliance on the 3 axes of tilt, which are less sensitive to larger movements of walking or transport.
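A minimal sketch of how squeeze and tilt might combine (the sensor interface and thresholds are invented for illustration): tilt only scrolls while the device is squeezed, and a simple low-pass filter keeps short jolts from walking out of the signal.

bc. # Sketch: scroll with tilt, but only while the device is squeezed (a quasi-mode).
class TiltScroller:
    def __init__(self, alpha=0.5, deadband=0.15):
        self.alpha = alpha          # smoothing factor for the low-pass filter
        self.deadband = deadband    # ignore tilt smaller than this (in g)
        self.smoothed = 0.0
    def update(self, tilt_y, squeezed):
        # tilt_y: raw accelerometer reading along one axis, in g.
        self.smoothed += self.alpha * (tilt_y - self.smoothed)
        if not squeezed or abs(self.smoothed) < self.deadband:
            return 0                # quasi-mode off, or tilt too small to be deliberate
        return int(self.smoothed * 10)   # steady tilt maps to a scroll rate
scroller = TiltScroller()
for reading, squeezed in [(0.05, True), (0.3, False), (0.3, True), (0.35, True)]:
    print(scroller.update(reading, squeezed))   # 0, 0, then scrolling begins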
h3. Gestures
!/images/embodied_music_gestures.jpg!
I’m not sure about gestural interfaces: most of the prototypes I have seen are difficult to learn, and require a certain level of performativity that I’m not sure everyone wants in public space. But having accelerometers inside these devices should, and would, allow for hacking together other personal, adaptive gestural interfaces that would perhaps access higher-level functions of the device.
!/images/embodied_music_earbuds.jpg!
One gesture I think could be simple and effective would be covering the ear to switch tracks. To try this out we could add a light or capacitive touch sensor to each earbud.
With this I think we would have trouble with interference from other objects, like resting the head against a wall. But there’s something nicely personal and intimate about putting the hand next to the ear, as if to listen more intently.
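As a sketch of how that might be disambiguated (the sensor readings and timing thresholds are assumptions), the gesture could be treated as a brief cover-and-release rather than any sustained contact:

bc. # Sketch: fire a 'next track' event only for brief cover-and-release gestures,
# ignoring long contact such as a head resting against a wall.
MIN_COVER, MAX_COVER = 0.1, 0.8   # seconds; assumed thresholds
def detect_gestures(samples):
    # samples: list of (timestamp_seconds, covered_bool) from the earbud sensor.
    gestures, cover_start = [], None
    for t, covered in samples:
        if covered and cover_start is None:
            cover_start = t
        elif not covered and cover_start is not None:
            if MIN_COVER <= t - cover_start <= MAX_COVER:
                gestures.append(t)    # a deliberate cover-and-release: skip track
            cover_start = None        # long contact falls through and is ignored
    return gestures
samples = [(0.0, False), (1.0, True), (1.3, False), (2.0, True), (5.0, False)]
print(detect_gestures(samples))   # [1.3]: the quick cover counts, the long one doesn't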
h3. More knobs
!/images/embodied_music_knobs.jpg!
Things that are truly analogue, like volume and time, should be mapped to analogue controls. I think one of the greatest unexplored areas in digital music is real-time audio-scrubbing, currently not well supported on any device, probably because of technical constraints. But scrubbing through an entire album, with a directly mapped input, would be a great way of finding the track you wanted.
Research projects like the “DJammer”:http://www.hpl.hp.com/research/mmsl/projects/djammer/ are starting to look at this, specifically for DJs. But since music is inherently time-based there is more work to be done here for everyday players and devices. Let’s skip the interaction design habits we’ve learnt from the CD era and go back to vinyl 🙂
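As a sketch of the direct mapping this implies (the track titles and lengths are invented), a scrub control could map its normalised position straight onto the album’s running time:

bc. # Sketch: map a normalised scrub position (0.0-1.0) onto an album timeline.
tracks = [("Track 1", 210), ("Track 2", 185), ("Track 3", 240)]   # (title, seconds)
album_length = sum(seconds for _, seconds in tracks)
def locate(scrub_position):
    # Return (track title, offset in seconds) for a scrub position in [0, 1].
    target = scrub_position * album_length
    for title, seconds in tracks:
        if target < seconds:
            return title, round(target, 1)
        target -= seconds
    return tracks[-1][0], tracks[-1][1]   # clamp at the very end
print(locate(0.5))   # ('Track 2', 107.5): halfway through the album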
h3. Evolution of the display
!/images/embodied_music_display.jpg!
Where displays are required, I hope we can be free of small, fuzzy, low-contrast LCDs. With new displays being printable on paper, textiles and other surfaces there’s the possibility of improving the usability, readability and “glanceability” of the display.
We are beginning to see signs of this with the OLED display on this “Sony Network Walkman”:http://dapreview.net/comment.php?comment.news.1086, where the display is under the surface of the product material, without a separate “glass” area.
!/images/embodied_music_display2.jpg!
For the white surface of an iPod, the high-contrast, “paper-like surfaces”:http://www.polymervision.com/New-Center/Downloads/Index.html of technologies like e-ink would make great, highly readable displays.
h3. Prototyping
!/images/embodied_music_prototype.jpg!
So I really need to get prototyping with accelerometers and display technologies, to understand simple movement and gesture in navigating music libraries. There are other questions to answer: I’m wondering if using movement to scroll through search results would create the appearance of a large screen space, through the lens of a small screen. As with “bumptunes”:http://interconnected.org/home/2005/03/04/apples_powerbook, I think many more opportunities will emerge as we make these things.
h3. More reading
“Designing for Shuffling”:http://www.cityofsound.com/blog/2005/04/designing_for_s.html
“Thoughts on the iPod Shuffle”:http://interconnected.org/home/2005/04/22/there_are_two
“Bumptunes”:http://interconnected.org/home/2005/03/04/apples_powerbook
“Audioclouds/gestural interaction”:http://www.dcs.gla.ac.uk/~jhw/audioclouds/
“Sound objects”:http://www.elasticspace.com/2005/02/sound-objects
“DJammer”:http://www.hpl.hp.com/research/mmsl/projects/djammer/
“On the body”:http://people.interaction-ivrea.it/b.negrillo/onthebody/
“Runster”:http://communications.siemens.com/cds/frontdoor/0,2241,hq_en_0_91525_rArNrNrNrN,00.html
Spatial memory at Design Engaged 2004
Notes on two related projects:
h2. 1. Time that land forgot
* A “project”:http://www.elasticspace.com/timeland/ in collaboration with Even Westvang
* Made in 10 days at the Icelandic locative media workshop, summer 2004
* Had the intention of making photo archives and gps trails more useful/expressive
* Looked at patterns in my photography: 5 months, 8000 photos, visualised by date and time of day. A fantastic resource for me: late night parties, early morning flights, holidays and the effect of the midnight sun are all visible.
* Looking now to make it useful as part of a more pragmatic interface, and to try other approaches that are less about abstracted visualisation
* “prototype”:http://www.elasticspace.com/timeland
* “info, details, research and source code”:http://www.elasticspace.com/2004/07/timeland
* “time visualisation”:http://www.elasticspace.com/images/photomap_times_large.gif
h2. 2. Marking in urban public space
I’ve also been mapping stickering, stencilling and flyposting: walking around with the camera+gps and “photographing examples of marking”:http://www.flickr.com/photos/timo/sets/8380/ (not painted graffiti).
!/images/marking01.jpg!
This research looks at the marking of public space by investigating the physical annotation of the city: stickering, stencilling, tagging and flyposting. It attempts to find patterns in this marking practice, looking at visibility, techniques, process, location, content and audience. It proposes ways in which this marking could be a layer between the physical city and digital spatial annotation.
h3. Some attributes of sticker design
* *Visibility*: contrast, monochromatic, patterns, bold shapes, repetition
* *Patina*: history, time, decay, degradation, relevance, filtering, social effects
* *Physicality*: residue of physical objects: interesting because these could easily contain digital info
* *Adaptation and layout*: layout is usually respectful, innovative use of DTP and photocopiers, adaptive use of sticker patina to make new messages on top of old
!/images/marking02.jpg!
Layers of information build on top of each other: as with graffiti, stickers show their age through fading and patina, and flyposters become unstuck, torn and covered in fresh material. Viewed from a distance the patina is evident, new work tends to respect old, and even commercial flyposting respects existing graffiti work.
!/images/marking03.jpg!
Techniques vary from strapping zip-ties through cardboard and around lampposts for large posters, to simple hand-written notes stapled to trees, and short-run printed stickers. One of the most fascinating and interactive techniques is the poster offering strips of tear-off information. These are widely used, even in remote areas.
!/images/marking04.jpg!
Initial findings show that stickers don’t relate to local space: they are less about specific locations than about finding popular locations, “cool neighbourhoods” or just ensuring repeat exposure. This is the opposite of my expectations, and perhaps sheds some light on the current success or failure of spatial annotation projects.
I am particularly interested in the urban environment as an interface to information and an interaction layer for functionality, using our spatial and navigational senses to access local and situated information.
There is a concern that in a densely spatially annotated city we might have an overload of information: what about filtering and foregrounding relevant, important information? Given that current technologies have very short ranges (10-30mm), we might be able to use our existing spatial skills to navigate overlapping information. We could shift some of the burden of information retrieval from information architecture to physical space.
I finished by showing this animation by Kriss Salmanis, a young Latvian artist. An amazing re-mediation of urban space through stencilling, animation and photography. (“Un ar reizi naks tas bridis” roughly translates as “And in time the moment will come”.)
h2. Footnotes/references
p(footnote). Graffiti Archaeology, Cassidy Curtis
otherthings.com/grafarc
p(footnote). Street Memes, collaborative project
streetmemes.com
p(footnote). Spatial annotation projects list
elasticspace.com/2004/06/spatial-annotation
p(footnote). Nokia RFID kit for 5140
nokia.com/nokia/0,,55739,00.html
p(footnote). Spotcodes, High Energy Magic
highenergymagic.com/spotcode
p(footnote). ‘Mystery Meat navigation’, Vincent Flanders
fixingyourwebsite.com/mysterymeat.html
p(footnote). RDF as barcodes, Chris Heathcote
undergroundlondon.com/antimega/archives/2004_02.html
p(footnote). Implementation: spatial literature
nickm.com/implementation
p(footnote). Yellow Arrow
yellowarrow.org
Time that land forgot
There are two versions: a “low-bandwidth”:/timeland/noimages.html no-image version and a “high-bandwidth”:/timeland/ version with images. There is also a “Quicktime movie”:http://polarfront.org/time_land_forgot.mov for people that can’t run Flash at a reasonable frame rate.
We have made the “source code”:http://www.polarfront.org/timeland.zip (.zip file) available for people that want to play with it, under the GNU General Public License (GPL).
h2. Background: Narrative images and GPS tracks
Over the last five years Timo has been photographing daily experience using a digital camera and archiving thousands of images by date and time. Transient, ephemeral and numerous; these images have become a sequential narrative beyond the photographic frame. They sit somewhere between photography and film, with less emphasis on the single image in re-presenting experience.
For the duration of the workshop Timo used a GPS receiver to record tracklogs, capturing geographic co-ordinates for every part of the journey. It is this data that we explore here, using it to provide a history and context to the images.
This project is particularly relevant as mobile phones start to integrate location-aware technology and as cameraphone image-making becomes ubiquitous.
h2. Scenarios
We discussed the context in which we were creating an application: who would use it, and what would they be using it for? In our case, Timo is using the photographs as a personal diary, and this is the first scenario: a personal life-log, where visualisations help to recollect events, time-periods and patterns.
Then there is the close network of friends and family, or participants in the same journey, who are likely to invest time looking at the system and finding their own perspective within it. Beyond that there is a wider audience interested in images and information about places, that might want a richer understanding of places they have never been, or places that they have experienced from a different perspective.
Images are immediately useful and communicative for all sorts of audiences; it was less clear how we should use the geographic information, since the GPS tracks might only be interesting to people who actually participated in that particular journey or event.
h2. Research
We looked at existing photo-mapping work, discovering a lot of projects that attempted to give images context by placing them within a map. But these visualisations and interfaces seemed to foreground the map over the images, and photos embedded in maps get lost in the layering. The problem was most dramatic with topographic or street maps full of superfluous detail, detracting from the immediate experience of the image.
Even the exhaustive and useful research from Microsoft’s “World Wide Media Index (WWMX)”:http://wwmx.org/ arrives at a somewhat unsatisfactory visual interface. The paper details five interesting mapping alternatives, and settles on a solution that averages the number of photos in any particular area, giving it a representatively scaled ‘blob’ on a street map (see below). Although this might solve some problems with massive data-sets, it seems a rather clunky interface solution, overlooking something that is potentially beautiful and communicative in itself.
!/images/photomap_wwmx.jpg!
p(caption). See “http://wwmx.org/docs/wwmx_acm2003.pdf”:http://wwmx.org/docs/wwmx_acm2003.pdf page 8
Other examples (below) show different mapping solutions: Geophotoblog pins images to locations but staggers them in time to avoid layering; an architectural map of Pariser Platz, Berlin gives an indication of direction; and an aerial photo is used as context for user-submitted photos at Tokyo Picturesque. There are more examples of prior work, papers and technologies “here”:http://www.elasticspace.com/index.php?id=44.
!/images/photomap_berlin.jpg!
p(caption). Image from “Pariser Platz Berlin”:http://www.fes.uwaterloo.ca/u/tseebohm/berlin/map-whole.html
!/images/photomap_geophotoblog.jpg!
p(caption). Image from “geophotoblog”:http://brainoff.com/geophotoblog/plot/
!/images/photomap_tokyo.jpg!
p(caption). Image from “Tokyo Picturesque”:http://www.downgoesthesystem.com/devzone/exiftest/final/
By shifting the emphasis to location, the aspect most clearly lacking in these representations is _time_, and thereby also the context in which the images can most easily form a narrative for the viewer. The images become subordinate to the map, which removes their instant expressivity.
We feel that these orderings make spatially annotated images a weaker proposition than simple sequential images in terms of telling the story of the photographer. This is very much a problem of the seemingly objective space as contained by the GPS coordinates versus the subjective place of actual experience.
h2. Using GPS Data
We started our technical research by looking at the data that is available to us, discovering data implicit in the GPS tracks that could be useful as context, much of which is seldom exposed:
* location
* heading
* speed in 3 dimensions
* elevation
* time of day
* time of year
With a little processing, and a little extra data, we can find the following (a rough sketch of this processing appears after the list):
* acceleration in 3 dimensions
* change in heading
* mode of transportation (roughly)
* nearest landmark or town
* actual (recorded) temperature and weather
* many other possibilities based on local, syndicated data
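As a rough sketch of that processing (using plain trackpoint tuples rather than a GPX parser), speed, an approximate heading and acceleration fall out of consecutive points like this:

bc. import math
# Sketch: derive speed, rough heading and acceleration from consecutive
# (timestamp_seconds, latitude, longitude) trackpoints.
def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 6371000 * 2 * math.asin(math.sqrt(a))
def derive(points):
    speeds, headings = [], []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(points, points[1:]):
        speeds.append(haversine_m(la1, lo1, la2, lo2) / (t2 - t1))               # m/s
        headings.append(math.degrees(math.atan2(lo2 - lo1, la2 - la1)) % 360)    # rough, flat-earth bearing
    accelerations = [s2 - s1 for s1, s2 in zip(speeds, speeds[1:])]              # change in speed per sample
    return speeds, headings, accelerations
points = [(0, 64.13, -21.90), (60, 64.131, -21.90), (120, 64.135, -21.91)]
print(derive(points))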
Would it be interesting to use acceleration as a way of looking at photos? We would be able to select arrivals and departures by choosing images that were taken at moments of greatest acceleration or deceleration. Would these images be the equivalent of ‘establishing’, ‘resolution’ or ‘transition’ shots in film, generating a good narrative frame for a story?
Would looking at photos by a specific time of day give good indication of patterns and habits of daily life? The superimposition of daily unfolding trails of an habitual office dwelling creature might show interesting departures from rote behaviour.
h2. Using photo data
By analysing and visualising image metadata we wanted to look for ways of increasing the expressive qualities of an image library. Almost all digital images are saved with the date and time of capture, but we also found unexplored tags in the EXIF data that accompanies digital images:
* exposure
* aperture
* focus distance
* focal length
* white balance
We analysed metadata from almost 7000 photographs taken between 18 February and 26 July 2004 to see patterns that we might be able to exploit for new interfaces. We specifically looked for patterns that helped identify changes over the course of the day.
!/images/photomap_EXIF_small.gif!:/images/photomap_EXIF_large.gif
p(caption). Shutter, Aperture, Focal length and File size against time of day (click for larger version)
This shows an increase in shutter speed and aperture during the middle of the day. The images also become sharper during daylight hours, indicated by an increased file-size.
!/images/photomap_times_small.gif!:/images/photomap_times_large.gif
p(caption). Date against time of day (click for larger version)
This shows definite patterns: holidays and travels are clearly visible (three horizontal clusters towards the top), as are late night parties and early morning flights. This gives us huge potential for navigation and interface. Image-based ‘life-log’ applications like “Flickr”:http://www.flickr.com and “Lifeblog”:http://www.nokia.com/lifeblog are appearing; the visualisation of this light-weight metadata will be invaluable for re-presenting and navigating large photographic archives like these.
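A minimal sketch of this kind of date against time-of-day plot, assuming the capture timestamps have already been extracted from the EXIF data (the sample values here are placeholders):

bc. import datetime as dt
import matplotlib.pyplot as plt
# Sketch: scatter capture date against time of day for a list of timestamps.
timestamps = [dt.datetime(2004, 2, 18, 9, 30), dt.datetime(2004, 3, 2, 23, 15),
              dt.datetime(2004, 7, 20, 4, 50)]   # placeholder data
dates = [t.date() for t in timestamps]
hours = [t.hour + t.minute / 60 for t in timestamps]
plt.scatter(dates, hours, s=4)
plt.xlabel("date")
plt.ylabel("time of day (hours)")
plt.show()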
Matias Arje – also at the Iceland workshop – has done “valuable work”:http://arje.net/locative/ in this direction.
h2. Technicalities
Getting at the GPS and EXIF data was fairly trivial, though it did demand some testing and swearing.
We both work on Apple OS X systems, and we had to borrow a PC to get the tracklogs reliably out of Timo’s GPS and into Garmin’s MapSource. We decided to use GPX as our format for the GPS tracks; GPSBabel happily created this data from the original Garmin files.
The EXIF data was parsed out of the images by a few lines of Python using the EXIF.py module, and turned into another XML file containing each image’s file name and timestamp.
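A minimal sketch of that step, assuming EXIF.py’s process_file interface (the same module is now distributed as exifread); the paths and XML layout here are illustrative:

bc. import glob
import EXIF   # the EXIF.py module; the same interface now ships as 'exifread'
# Sketch: pull file name and capture timestamp out of each image's EXIF data
# and write them into a small XML file for the Flash front end to read.
records = []
for path in glob.glob("images/*.jpg"):
    with open(path, "rb") as f:
        tags = EXIF.process_file(f)
    stamp = tags.get("EXIF DateTimeOriginal")   # e.g. '2004:07:20 04:50:12'
    if stamp:
        records.append((path, str(stamp)))
with open("photos.xml", "w") as out:
    out.write("<photos>\n")
    for path, stamp in records:
        out.write('  <photo file="%s" taken="%s"/>\n' % (path, stamp))
    out.write("</photos>\n")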
We chose Flash as the container for the front end: it is ubiquitous, and it is Even’s programming poison of choice for visualisation. Flash reads both the GPX and EXIF XML files and generates the display in real-time.
More on our choices of technologies “here”:http://www.elasticspace.com/2004/07/geo-referenced-photography.
h2. First prototype
!/images/timeland_screenshot06.jpg!:/timeland/
“View prototype”:http://www.elasticspace.com/timeland/
Mirroring Timo’s photography and documentation effort, Even has invested serious time and thought in “dynamic continuous interfaces”:http://www.polarfront.org. The first prototype is a linear experience of a journey, suitable for a gallery or screening, where images are overlaid into textural clusters of experience. It shows a scaling representation of the travel route based on the distance covered over the last 20-30 minutes. Images recede in scale and importance as they move back in time. Each tick represents one minute; every red tick represents an hour.
We chose to create a balance of representation in the interface around a set of priorities: first image (for expressivity), then time (for narrative), then location (for spatialising, and commenting on, image and time).
In making these interfaces there is the problem of scale. The GPS data itself has a resolution down to a few meters, but the range of speeds a person can travel at varies wildly through different modes of transportation. The interface therefore had to take into account the temporo-spatial scope of the data and scale the resolution of display accordingly.
This was solved by creating a ‘camera’ connected to a spring system that attempts to centre the image on the advancing ‘now’ while keeping a recent history of 20 points in view. The parser for the GPS tracks discards the positional data between minutes, and the animation is driven forward by every new ‘minute’ found in the track, which is inserted into the view of the camera. This animation system can be used to generate both animations and interactive views of the data set.
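A sketch of the spring idea in isolation (the constants are invented; the real implementation lives in Flash): each tick, the camera position is pulled toward the advancing ‘now’ by a damped spring.

bc. # Sketch: a damped spring pulling a 1D camera position toward a moving target.
class SpringCamera:
    def __init__(self, stiffness=0.1, damping=0.8):
        self.position, self.velocity = 0.0, 0.0
        self.stiffness = stiffness   # how hard the spring pulls toward 'now'
        self.damping = damping       # friction, so the camera settles rather than oscillating
    def tick(self, target):
        self.velocity += (target - self.position) * self.stiffness
        self.velocity *= self.damping
        self.position += self.velocity
        return self.position
camera = SpringCamera()
for minute_point in [0, 10, 10, 10, 25, 25, 25, 25]:   # the advancing 'now'
    print(round(camera.tick(minute_point), 2))

The actual interface does this in two dimensions and also drives the zoom extents, but the spring-and-friction behaviour is the same.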
There are some issues with this strategy. There will be discontinuities in the tracklogs where the GPS is switched off during standstills and at night. Currently the system smooths tracklog time to make breaks seem more like quick transitions.
The system should ideally maintain a ‘subjective feeling’ of time adjusted to picture taking and movement; a temporal scaling as well as a spatial scaling. This would be an analog to our own remembering of events: minute memories from double loop roller-coasters, smudged holes of memory from sleepy nights.
Most of the tweaking in the animation system went into refining the extents system around the camera history and zoom, the acceleration and friction of the spring systems, and the ratio between the insertion of new points and animation ticks.
In terms of processing speed this interface should ideally have been built in Java or as a stand-alone application, though tests have shown that Flash is able to parse a 6000-point tracklog and draw it on screen along with 400 medium-resolution images. Once the images and points have been drawn on the canvas they animate with reasonable speed on mid-spec hardware.
h2. Conclusions
This prototype has proved that many technical challenges are solvable, and given us a working space to develop more visualisations, and interactive environments, using this as a tool for thinking about wider design issues in geo-referenced photography. We are really excited by the sense of ‘groundedness’ the visualisation gives over the images, and the way in which spatial relationships develop between images.
For Timo it has given a new sense of spatiality to image making: the images are no longer locked into a simple sequential narrative, but are affected by spatial differences like location and speed. He is now experimenting with more ambient recording, taking a photo exactly every 20 minutes for example, in an effort to affect the presentation.
h2. Extensions
Another strand of ideas we explored was the metaphor of a 16mm “Steenbeck”:http://www.harvardfilmarchive.org/gallery/images/conservation_steenbeck.jpg edit deck: scrubbing 16mm film through the playhead and watching the resulting sound and image come together. We could use the scrubbing of an image timeline to control all of the other metadata, and give real control to the user. It would be exciting to explore a spatial timeline of images, correlated with contextual data like the GPS tracks.
We need to overcome the difficulty of obtaining quality data, especially if we expect this to work in an urban environment. GPS is not passive, and “requires a lot of attention to record tracks”:http://www.elasticspace.com/index.php?id=4. Overall our representation doesn’t require location accuracy, just consistency and ubiquity of data; we hope that something like cell-based tracking on a mobile phone becomes more ubiquitous and usable.
We would like to experiment further with the extracted image metadata. For large-scale overviews, images could be replaced by a simple rectangular proxy, coloured by the average hue of the original picture and taking brightness (EV) from exposure and aperture readings. This would show the actual brightness recorded by the camera’s light meter, instead of the brightness of the image.
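A sketch of that brightness calculation (standard exposure arithmetic, not code from the project): the exposure value metered by the camera follows from aperture and shutter speed, and the proxy rectangle would combine it with the image’s average hue.

bc. import math
# Sketch: exposure value (EV) from aperture (f-number) and shutter speed (seconds).
# EV = log2(N^2 / t); a higher EV means a brighter scene as metered by the camera.
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)
print(round(exposure_value(8.0, 1 / 500), 1))   # bright daylight, around EV 15.0
print(round(exposure_value(2.8, 1 / 15), 1))    # dim interior, around EV 6.9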
Imagine a series of images from bright green vacation days, dark grey winter mornings or blue Icelandic glaciers, combined with the clusters and patterns that time-based visualisation offers.
We would like to extend the data sets to include other people: from teenagers using GPS camera phones in Japan to photojournalists. How would visualisations differ, and are there variables that we can pre-set for different uses? And how would the map look with multiple trails to follow, as a collaboration between multiple people and multiple perspectives?
At a technical level it would be good to have more integration with developing standards: we would like to use “Locative packets”:http://locative.net/workshop/index.cgi?Locative_Packets, we just need more time and reference material. This would make it useful as a visualisation tool for other projects, “Aware”:http://aware.uiah.fi/ for example.
We hope that the system will be used to present work from other workshops, and that an interactive installation of the piece can be set up at “Art+Communication”:http://rixc.lv/04/.
h2. Biographies
Even Westvang works between interaction design, research and artistic practice. Recent work includes a slowly growing digital organism that roams the LAN of a Norwegian secondary school and an interactive installation for the University of Oslo looking at immersion, interaction and narrative. Even lives and works in Oslo. His musings live on “polarfront.org”:http://www.polarfront.org and some of his work can be seen at “bengler.no”:http://www.bengler.no.
Timo Arnall is an interaction designer and researcher working in London, Oslo and Helsinki. Recent design projects include a social networking application, an MMS based interactive television show and a large media archiving project. Current research directions explore mapping, photography and marking in public places. Work and research can be seen at “elasticspace.com”:http://www.elasticspace.com.
h2. Screenshots
!/images/timeland_screenshot01.jpg!
!/images/timeland_screenshot04.jpg!
!/images/timeland_screenshot08.jpg!
!/images/timeland_screenshot11.jpg!
!/images/timeland_screenshot12.jpg!
!/images/timeland_screenshot16.jpg!
!/images/timeland_screenshot19.jpg!
!/images/timeland_screenshot20.jpg!
Posted in Art, Experience design, Mapping, Mobility, Narrative, Photography, Place, Project, Travel
Loop city workshop
h3. Bill Hillier: Cities are movement economies
* http://www.spacesyntax.com/
h3. In the city there are
* space explorers: children, homeless, vendors, skateboarders,
* space utilisers: commuters, workers,
h3. Two ways of looking at the city
* exocentric: external, connected
* egocentric: centred, point of view,
h3. Spatial organisation
* Large, diverse research field.
* Ronald Abler and John S. Adams: ‘Spatial organisation: the geographer’s view of the world’
h3. Relative space
* Expressing thematic data through spatial differentiation
h3. Scaling areas according to non-geographic data
* Political maps based on size of army
* Map of USA based on Elvis concerts
h3. Time space
* Irina Vasiliev: ‘Design issues for mapping time’
* Time as a way of measuring space (one conclusion: world is shrinking)
h3. Taxicab geography
* Grid systems make diagonal movement problematic
* There is a study of movement in grid spaces showing multiple optimal routes: a big L shape is the same distance as a zig-zag (see the small example after this list).
* The grid is no longer in Euclidean space
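A tiny worked example of why the L and the zig-zag come out the same: taxicab distance only counts the total blocks along each axis, regardless of the order the moves are taken in.

bc. # Sketch: Manhattan (taxicab) distance depends only on the total blocks per axis.
def taxicab(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
print(taxicab((0, 0), (3, 4)))   # 7 blocks, whether walked as an L or as a zig-zag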
h3. Social space
* Philip Thiel: Spatial annotation methods
h3. John S. Adams
* Human geographer
* Mapped human interaction over 1 day
* vertical axis: time
* horizontal axis: distance
* made 3D diagrams of this multi-dimensional space, showing relative distances travelled and communicated with over 1 day.
* Social network maps
h3. Mental mapping
* spatial representations of the brain or memory
* In some ways the analysis by Lynch and others has failed, because they focused on trying to know everything about people’s mental maps of the city.
* Richard Long: walking project
h3. Imagined cities
* Norman Klein: History of forgetting
* Fictional writers form mental models of cities
* Calvino
h3. Textmaps
* Dietmar recreated the shape of LA by phoning people and asking directions
* PML maps
h3. Single parameter mapping
* Boylan Heights maps: Denis Wood
* Maps of Halloween lanterns in an area
h3. Multiple parameter mapping
* Correlating space
* Chernoff faces: iconographic representations of faces, with expressions that map to different social conditions
* Eugene Turner
* Correlating socio-economic factors is common
h3. Mapping as a game
* Raoul Bunschoten
h3. Narrowed the analysis of space down to very simple procedures
* erasure
* origination
* transformation
* migration
* Mapped results as a synthesis?
h3. Photographic / media mapping
* Tokyo Nobody
* Images with text removed, replaced with a textmap
* Text / image project… ?
* Graffiti archaeology project
* Time lapse as a tool: mapping crowds
* Threshold linear key as a tool: RCA project…
h3. Diagrammatic / information mapping
* Tufte
* Information diagrams representing time, space, actions, events, people, cause/effect etc.
h3. Collaborative mapping
* multiple authorship over shared themes
h3. Sarah
* Presented her NY green space project, in which access to green space is correlated with socio-economic factors. Refer to the Social Design Notes weblog.
h3. Some ideas for mapping
* Children’s tactile book: sandpaper for Asphalt, felt for grass.
* Litter, sky cover, text, colours, people, edges, boundaries, nodes
* Use GPS and a digital camera. Use a compass to always orient the camera to North, or a relevant reference. Then map the space with textures or sky cover (down or up). Could make a great map.
* A method for collaborative presentation might be to use a projector to trace physical space onto a wall or large open space, then to layer drawn annotations. A public presentation could be achieved by projecting digital data (photos, textures, movement) onto this annotated area, for interesting layered correlations.
* Everyone has their own agenda when approaching a space: personal ways of looking, awareness, attractions and un-attractions. Could try to map what a space makes you think instantly, from one vantage point, or multiple, correlated vantage points.
* Bluetooth mapping of devices. Our personal ‘auras’ are becoming public and this might be useful for mapping.
h3. What kind of data can we collect about the city and its usage that is really reliable and plentiful?
The Audioscrobbler mapping example shows how really simple data can be mapped into extraordinarily useful spatial representations, just because it is high quality and plentiful.
* Geographic data is potentially plentiful, because there is a lot of effort put into mapping space.
* What other things are mapped with effort, or easily?
Mobile outskirts workshop
There is a “workshop wiki”:http://locative.rixc.lv/tcm/workshops/index.cgi?Location_Norway and “media archive”:http://aware.uiah.fi/packet/?id=TCM that we are attempting to keep updated via fairly limited wireless coverage.
A painless and creative 15 hour bus drive took us from Trondheim up to the islands of Lofoten, in a bus full of GPS receivers, cameras and “impromptu artworks”:http://www.boutiquevizique.com/analoGps/.
Posted in Art, General, Mapping, Media, Mobility, Place, Technology, Travel
Outside In
Outside In is a forum for involving new voices, media and practices in a discourse about the use and design of public space. It took place from 14 to 15 June 2004.
Roda Sten is amazing, below a suspension bridge, with huge concrete creations. Really windy, but calm inside the lecture space. Here are my notes and a few pictures.
!/images/outsidein01.jpg!
!/images/outsidein02.jpg!
!/images/outsidein03.jpg!
!/images/outsidein04.jpg!
!/images/outsidein05.jpg!
!/images/outsidein06.jpg!
!/images/outsidein07.jpg!
!/images/outsidein08.jpg!
!/images/outsidein09.jpg!
!/images/outsidein10.jpg!
!/images/outsidein11.jpg!
h2. Day 1
h3. Session 2: Hacking the streets (I missed the 1st workshop)
h3. Space Hijackers
* Putting memories in spaces: spaces aren’t the same after having been disrupted. After ‘reclaim the streets’ or a ‘circle line party’ you can’t see the space in the same way.
* Distinction between public and private. What is it?
* Public space doesn’t exist anymore.
* Ken’s new city hall is half private half public (private investment was involved in the building, so protests cannot happen outside)
* Do we need institutions in order to do events, is that the only way to do it legally?
* What’s stopping people from doing these things is not necessarily capitalism, but the fear of looking like a pillock: self-regulation is a big factor. Can spark things to let down inhibitions or shackles. Uses the example of the scooter: it became a kids’ toy and then it wasn’t cool anymore.
* What’s the connection between anarchism and these spontaneous events? Emergent order is interesting: there is so much control over actions and the ways people move through the city. How does this relate to anarchy? Is this anarchy?
h3. Zevs
* The city is a workshop: not just walls to tag
* Shadows of urban furniture: really good
* Visual kidnapping: Lavazza woman gets cut out of the frame
* Big poster with bleeding eyes
* Uses a high-pressure water jet to clean the city, but also to write at the same time.
* Digs at the notion of authorship, a site where people find work on the streets
* The work is anonymous, but there is the projection of authorial control behind it; it’s individual and definitely authored
* Would be interesting to explore more about Graffiti authorship: how do public artists want to be recognised?
* Managing the mystique around the work and the author.
* Difference between author/instigator
* “Interview”:http://www.paris-art.com/modules-modload-interviews-travail-1592.html
* “Visual kidnapping”:http://www.visual-kidnapping.org/
h3. 3D bombing: Akim
* Polystyrene models, matched to fit specific city spaces
* City of names: what if the writers are the ones who build the houses?
h2. Day 2
h3. Session 3: Network experience
h3. “Jonah Brucker Cohen”:http://coin-operated.com/
* Wants to deconstruct network context
* Context: physical and social situation in which computation sits
* How does the network affect the output and experience
* Companies are claiming ownership of space because of signal strength: strengthening signals to drown out free competition
* WiFihog: saps out all wifi bandwidth
* LAN party versus Flash Mob
* Simpletext: collaborative SMS image searching on large screens
* re-mapping and changing the context of interfaces: what about shifting consequences, changing the input/output relationship?
* Simpletext project: assigns an image search to inputted text messages, and displays it via Jitter/Max on a large screen.
* Steven Levy quote on hackers
h3. “Katherine Moriwaki”:http://kakirine.com/
* Altering space by altering the body
* character of a space
* remnants of things, people, individuals
* put magnets on wrists and fingers and bodies to reveal the proximity of electronic devices: unexpected connections to other people and lampposts. Nice.
h3. Data Climates: Pedro Sepúlveda Sandoval
* Living in a scanscape city
* electronic space, synthetic city
* Congestion charge as walled city, in electronic space
* London: highest density of cctv in the world
* will we decide to travel to areas based on the quality of electronic space
* A new architectural language for electronic space
* Houses without windows, just cameras. Can start to control life inside. Can also choose to use the weather channel as windows
* Pay a fee for personal surveillance: ask them to watch you all the way to the supermarket.
* The city of Yokohama was brought down by the coming of age party for 40,000 teenagers: the networks were overloaded with messages, because the teenagers didn’t want to talk face to face.
* Palm trees as cell towers (seen in South Africa)
* Looked at a community in Hackney that were campaigning to not have a cell phone tower.
* Designed a house for them that would shield them from the signals, but they would have to give up cell phone connectivity. Designed it so that windows would open and close based on calls being made, or would give them 10 minute windows in which to make calls every 2 hours.
* Digital shelter: stand inside the line
h3. Round up
* These presentations all use the strategy of showing ‘hypothetical products’ that are really non-products. They are doing this, rather than providing platforms or design methodologies, or distributing resources and infrastructures for people to design their own systems. I understand the need for designers as visionaries, but this could be made more valuable and useful.
* specialists in electronic space could be similar to lighting design specialists in the ’70s. Will grow into a general field of understanding.
* Platforms and infrastructure for technology are beyond architects, but understanding the use and consequences is really important.
h3. Session 4
h3. Jocko Weyland
* Skateboarding as adaptive design: the difference between skate parks and the street. Skate parks become designed over time to mimic certain aspects of streets, but also according to innate, human skaters’ needs. A combination of factors goes into making a good skateboarding space: free, alcohol, quality, location.
h3. Swoon
* New to NY: wanted to work outside gallery space, and was inspired by the collage of city streets. Not from a graffiti background; being a woman, she can do certain things outside the norms of graffiti.
* Changes billboards during the day, looks official.
* Open democratic visual space
* a visual direct democracy…
* Cuba used to have street art as a means of free expression, but it was outlawed by the dictatorship
* Makes lightboxes with imagined cities, and mounts them on the reverse side of construction site walls, with peepholes labelled ‘peer here’
* Interesting mix of opportunism and ‘designed intervention’
* Sometimes driven purely by visual interest.
h3. “Michael Rakowitz”:http://www.possibleutopia.com/mike/
* Mike Davis: Public is phantom
* Bedouin as a model of sustainable nomadic communities
* Homeless use waste air from air conditioning (airvac exhaust ports) to stay warm and dry
* Homeless have receded to the peripheral vision of the public. Want to see and be seen.
* Seeing is important for living nomadically in the city.
* Started to map the heat and the power of the exhaust fans in the city. Found a high one at MIT plasma lab.
* Re-routed the smell from a bakery to an art gallery, to subvert a ‘high art’ re-appropriation of space
h2. Workshop ‘Loop City’
* “Dietmar Offenhuber”:http://residence.aec.at/wegzeit/ & Sara Hodges
* Showed Rybczynski’s film “New Book”:http://www.microcinema.com/titleResults.php?content_id=1190, using 9 frames: a good way of mapping space in the city. At first the viewer is not sure whether each frame is occurring synchronously, or in the same space, but then a bus passes through all of the frames and the spatial link is made immediately. There is also a point where a plane flies overhead and all the actors look up, showing time synchronicity too.
h3. Looking at the city
* as a set of repeated actions
* as a playground: situationists
* as a balance of social as well as physical architectures