Satellite Lamps

Posted on Aug 20, 2014 in Film, Photography, Project, Research, Technology, Urbanism


Satellite Lamps is a project that reveals one of the most significant contemporary technology infrastructures: the Global Positioning System (GPS). Made with Einar Sneve Martinussen and Jørn Knutsen as part of the Yourban research project at AHO, it continues the work of revealing the materials of technology that we began in 2009 with RFID and WiFi.

GPS is widely used, yet it is invisible and few of us have any real idea of how it works or how it inhabits our everyday environments. We wanted to explore the cultural and material realities of GPS technology, and to develop new understandings about it through design.

“Satellite Lamps shows that GPS is not a seamless blanket of efficient positioning technology; it is a negotiation between radio waves, earth-orbit geometry and the urban environment. GPS is a truly impressive technology, but it also has inherent seams and edges.”

We created a series of lamps that change brightness according to the accuracy of received GPS signals, and when we photograph them as timelapse films, we start to build a picture of how these signals behave in actual urban spaces.
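The core logic of a lamp can be sketched in a few lines, assuming a standard NMEA GPS receiver: the $GPGGA sentence carries fix quality and horizontal dilution of precision (HDOP), which can be mapped to brightness. The serial port and the set_lamp driver below are illustrative assumptions, not our actual build.

bc.. # Sketch: map GPS signal quality (HDOP) to lamp brightness.
import serial  # pyserial; the receiver is assumed to speak NMEA at 9600 baud

def set_lamp(level):
    """Stand-in for the lamp's dimmer driver (hypothetical)."""
    print('brightness: %.2f' % level)

def brightness_from_gga(sentence):
    """Parse a $GPGGA sentence; return brightness 0.0-1.0, or None if not GGA."""
    fields = sentence.split(',')
    if len(fields) < 9 or not fields[0].endswith('GGA'):
        return None
    if int(fields[6] or 0) == 0:      # fix quality 0 means no fix at all
        return 0.0
    hdop = float(fields[8] or 99.0)   # horizontal dilution of precision
    # An HDOP near 1 is an excellent fix, 10 or more is poor; clamp into 0..1.
    return max(0.0, min(1.0, (10.0 - hdop) / 9.0))

port = serial.Serial('/dev/ttyUSB0', 9600)  # illustrative port name
while True:
    line = port.readline().decode('ascii', errors='ignore').strip()
    level = brightness_from_gga(line)
    if level is not None:
        set_lamp(level)

p. In practice the mapping needs tuning by eye; HDOP is only a crude proxy for the accuracy that the films make visible.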

We’ve made a film that you can watch here, and published an extensive article that details how and why it was made. You can read more on how we explored GPS technology, how the visualisations were made, and about the popular cultural history of GPS.

Internet machine

Posted on May 13, 2014 in Exhibition, Film, Photography, Project, Research, Technology


Internet machine is a multi-screen film about the invisible infrastructures of the internet. The film reveals the hidden materiality of our data by exploring some of the machines through which ‘the cloud’ is transmitted and transformed.

Film: 6 min 40 sec, digital 4K, 25fps, stereo.
Installation: Digital projection, 3 x 16:10 screens, each 4.85m x 2.8m.
Medium: Digital photography, photogrammetry and 3D animation.

Internet machine (showing now at Big Bang Data, or watch the trailer) documents one of the largest, most secure and ‘fault-tolerant’ data-centres in the world, run by Telefónica in Alcalá, Spain. The film explores these hidden architectures with a wide, slowly moving camera. The subtle changes in perspective encourage contemplative reflection on the spaces where internet data and connectivity are being managed.

In this film I wanted to look beyond the childish myth of ‘the cloud’, to investigate what the infrastructures of the internet actually look like. It felt important to be able to see and hear the energy that goes into powering these machines, and the associated systems for securing, cooling and maintaining them.


What we find, after being led through layers of identification and security far higher than any airport, are deafeningly noisy rooms cocooning racks of servers and routers. In these spaces you are buffeted by hot and cold air that blusters through everything.


Server rooms are kept cool through quiet, airy ‘plenum’ corridors that divide the overall space. Fibre optic connections are routed through multiple, redundant paths across the building. In the labyrinthine corridors of the basement, these cables connect to the wider internet through holes in rough concrete walls.


Power is supplied not only through the mains, but backed up with warm caverns of lead batteries, managed by gently buzzing cabinets of relays and switches.


These are backed up in turn by rows of yellow generators, supplied by diesel storage tanks and contracts with fuel supply companies, so that the data centre can keep running until grid power returns.


The outside of the building is a facade of enormous stainless steel water tanks, containing tens of thousands of litres of cool water, sitting there in case of fire.


And up on the roof, to the sound of birdsong, is a football-pitch-sized array of shiny aluminium ‘chillers’ that filter and cool the air going into the building.


In experiencing these machines at work, we start to understand that the internet is not a weightless, immaterial, invisible cloud, and instead come to appreciate it as a very distinct physical, architectural and material system.

Production


This was a particularly exciting project, a chance for an ambitious and experimental location shoot in a complex environment. Telefónica were remarkably accommodating and allowed unprecedented access to shoot across the entire building, not just in the ‘spectacular’ server rooms. Thirty-two locations were shot inside the data centre over the course of two days, followed by five weeks of post-production.


To create a three-screen installation I had to invent some new production methods, building on techniques I first developed over ten years ago. The film was shot as both video and stills, on a Canon 5D Mk III mounted on a panoramic head. The video was shot using the Magic Lantern RAW module on the 5D, while the RAW stills were processed in Lightroom and stitched together using Photoshop and Hugin.
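The stitching itself was done in Photoshop and Hugin; purely as an illustration of the step, the same panoramic merge can be sketched with OpenCV’s high-level stitcher (the file names are placeholders):

bc.. # Illustrative only: the actual stitching used Photoshop and Hugin.
import cv2

frames = [cv2.imread(f) for f in ('pan_left.jpg', 'pan_mid.jpg', 'pan_right.jpg')]
stitcher = cv2.Stitcher_create()           # estimates homographies, blends seams
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite('panorama.jpg', panorama)

p. Each stitched panorama then became source material for the reprojection step described next.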


The footage was then converted into 3D scenes using camera calibration techniques, so that entirely new camera movements could be created with a virtual three-camera rig. The final multi-screen installation is played out in 4K projected across three screens.
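In compressed form, and with invented numbers, the idea behind that conversion is pose recovery: once you know where the real camera stood, a virtual rig is just the same pose offset three ways. The point correspondences and intrinsics below are illustrative assumptions, not measurements from the shoot.

bc.. # Sketch of pose recovery for camera-calibration-based reprojection.
import numpy as np
import cv2

# Four reference points measured in the room (metres), found in the frame (pixels).
object_points = np.float32([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0]])
image_points = np.float32([[410, 620], [1510, 640], [1490, 300], [430, 310]])
K = np.float32([[2200, 0, 960], [0, 2200, 540], [0, 0, 1]])  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
# rvec and tvec give the real camera's pose; a virtual three-camera rig is
# three copies of that pose, offset left, centre and right, used to re-render.

p. The production pipeline worked from far denser photogrammetric correspondences than this, but the principle, recover the pose and then move a virtual camera through it, is the same.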

There are more photos available at Flickr.

Internet machine is part of BIG BANG DATA, open from 9 May 2014 until 26 October 2014 at CCCB (Barcelona) and from February to May 2015 at Fundación Telefónica (Madrid).

Internet Machine is produced by Timo Arnall, Centre de Cultura Contemporània de Barcelona – CCCB, and Fundación Telefónica. Thanks to José Luis de Vicente, Olga Subiros, Cira Pérez and María Paula Baylac.

Touch

Posted on Sep 4, 2006 in Mobility, Project, Research, Technology, Ubicomp

The address book desk

Posted on Dec 16, 2005 in Interaction design, Project, Research, Ubicomp

Underneath the desk I have stuck a grid of RFID tags, and on the top surface, the same grid of post-it notes. With the standard Nokia Service Discovery application it is possible to call people, send pre-defined SMSes or load URLs by touching the phone to each post-it on the desk. On the post-its themselves I have hand-written the function, message and the recipient. This is somewhat like a cross between a phone-book, to-do list and temporary diary, with notes, scribbles and tea stains alongside names.
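The Nokia cover used its own tag format, but the modern equivalent of each post-it would be an NDEF record on the tag. A minimal sketch using the ndeflib Python package, with placeholder numbers and URL:

bc.. # Modern-equivalent sketch only: the 2005 Service Discovery cover predates NDEF.
import ndef

actions = [
    ndef.UriRecord('tel:+4712345678'),                    # call a friend
    ndef.UriRecord('sms:+4712345678?body=I am leaving'),  # pre-defined SMS
    ndef.UriRecord('http://example.com/timetable'),       # load a URL
]
for record in actions:
    octets = b''.join(ndef.message_encoder([record]))  # bytes to write to one tag

p. Each encoded message would be written to the tag under one post-it, so that the hand-written note and the tag’s payload describe the same action.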

Initial ideas were to spraypaint or silkscreen some of the touch icons to the desk surface, and I may well do that at some point. But for quick prototyping it made sense to use address labels or post-it notes that can be stuck, re-positioned and layered with hand-written notes.

This is an initial step in thinking about the use of RFID and mobile phones, a way of thinking through making. In many ways it is proving to be more inconvenient than the small screen (especially given the occasionally unreliable firmware on this particular cover; I can’t speak for the production version). But it has highlighted some really interesting issues.

First of all it has brought to the forefront the importance of implicit habits. Initially, it took a real effort to think about the action of using the table as an interface: I would reach for the phone and press names to make a call, instead of placing it on the desk. But for some functions, such as sending an SMS, it has become more habitual.

SMSes have become more like ‘pings’ when very little effort is made to send them. At the same time they are more physically tangible: I rest the phone in a certain position on the desk and wait for it to complete an action. The most useful functions have been “I’m here” or “I’m leaving” messages to close friends.

I have had to consider the ‘negative space’ where the mobile must rest without any action. This space has the potential to be used for context information; a corner of the table could make my phone silent, another corner could change my presence online. Here it would be interesting to refer to Jan Chipchase’s ideas around centres of gravity and points of reflection; it is these points that could be most directly mapped to behaviour. I’m thinking about other objects and spaces that might be appropriate for this, and perhaps around the idea of thoughtless acts.

If this was a space without wireless internet, I could also imagine this working very well for URLs: quick access to google searches, local services or number lookups, which is usually very tricky on a small screen. Here it would be interesting to think about how the mobile is used in non-connected places, such as the traditional Norwegian Hytte [pdf].

This process also raised a larger issue around the move towards tangible or physical information, which also implies a move towards the social. As I was making the layout of my address book and associated functions, I realised that maybe these things shouldn’t be explicit, visible, social objects. The arrangement of people within the grid makes personal sense; the placement is a personal preference and maps in certain ways to frequency and type of contact. But I wonder how it appears to other people when this pattern is exposed. Will people be offended by my layout? What if I don’t include a rarely called contact? Are there numbers I want to keep secret, hidden behind acronyms in the ‘Names’ menu?

It will be interesting to see how this plays out and changes over time, particularly in the reaction of others. I’ll post more about the use of NFC in other personal contexts in the near future.

h3. The making of…

The desk is made from 20 mm birch ply, surfaced in linoleum. I stuck a single RFID tag to the underside, in the place that felt most natural. A 10 cm grid was worked out from that point, the remaining tags were stuck down along it, and the same grid was laid out on top. If I were to re-build the desk with this project in mind, the tags should probably be layered closer to the surface, between the ply and the linoleum. This would make them slightly more responsive to touch by giving them a larger read/write distance.

p(caption). Rewritable 512-bit Philips MiFare UL stickers.

p(caption). 10 cm grid of tags on the underside of the desk.

p(caption). Blank post-it notes on the surface, with the same grid.

More photos at Flickr.

Graphic language for touch

This work explores the visual link between information and physical things, specifically around the emerging use of the mobile phone to interact with RFID or NFC. It was a presentation and poster at Design Engaged, Berlin, on 11 November 2005.

Download the icons (PDF, 721KB, Gif preview).

As mobile phones are increasingly able to read and write to RFID tags embedded in the physical world, I am wondering how we will appropriate this for personal and social uses.

I’m interested in the visual link between information and physical things. How do we represent an object that has digital function, information or history beyond its physical form? What are the visual clues for this interaction? We shouldn’t rely on a kind of mystery meat navigation (the scourge of the web-design world) where we have to touch everything to find out its meaning.

This work doesn’t attempt to be a definitive system for marking physical things; it is an exploratory process to find out how digital/physical interactions might work. It uncovers interesting directions while the technology is still largely out of the hands of everyday users.

h3. Reference to existing work

Visual references

p(caption). Click for larger version.

The inspiration for this is in the marking of public space and existing iconography for interactions with objects: push buttons on pedestrian crossings, contactless cards, signage and instructional diagrams.

This draws heavily on a substantial body of images of visual marking in public space. One of the key findings of this research was that the visibility and placement of stickers in public space is an essential part of their use. Current research in ubicomp and ‘locative media’ is not addressing these visibility issues.

There is also a growing collection of existing iconography in contactless payment systems, with a number of interesting graphic treatments in a technology-led, vernacular form. In Japan there are also instances of touch-based interactions being represented by characters, colours and iconography that are abstracted from the action itself.

I have also had great discussions with Ulla-Maaria Mutanen and Jyri Engeström, who have been doing interesting work with thinglinks and the intricate weaving of RFID into craft products.

h3. Development


Sketching and development revealed five initial directions: circles, wireless, card-based, mobile-based and arrows (see the poster for more details). The icons range from being generic (abstracted circles or arrows to indicate function) to specific (mobile phones or cards touching tags).

Arrows might be suitable for specific functions or actions in combination with other illustrative material. Icons with mobile phones or cards might be helpful in situations where basic usability for a wide range of users is required. Although the ‘wireless’ icons are often found on current card readers, they do not successfully indicate the touch-based interactions inherent in the technology, and may be confused with WiFi or Bluetooth. The circular icons work at the highest level, and might be most suitable for generic labelling.


For further investigation I have selected a simple circle, surrounded by an ‘aura’ described by a dashed line. I think this successfully communicates the near-field nature of the technology, while suggesting that the object contains something beyond its physical form.


In most current NFC implementations, such as the 3220 from Nokia and many iMode phones, the RFID reader is in the bottom of the phone. This means that the area of ‘activation’ is obscured in many cases by the phone and hand. The circular iconography allows for a space to be marked as ‘active’ by the size of the circle, and we might see it used to mark areas rather than points. Usability may improve when these icons are around the same size as the phone, rather than being a specific point to touch.

h3. Work in progress

These are early days for the technology, and this is work in progress. There is more to be done in looking at specific applications, finding suitable uses and extending the language to cover other functions and content.

Until now I have been concerned with generic iconography for a digitally augmented object. But this should develop into a richer language as the applications for this type of interaction become more specific, and related to the types of objects and information being used. For example, it would be interesting to find a graphic treatment that could be applied to a Pokémon sticker offering power-ups as well as to a bus stop offering timetable downloads.

I’m also interested in the physical placement of these icons. How large or visible should they be? Are there places that should not be ‘active’? And how will this fit with the natural centres of gravity of the mobile phone in public and private space?

I’ll expand on these things in a few upcoming projects that explore touch-based interactions in personal spaces.

Feel free to use and modify the icons, I would be very interested to see how they can be applied and extended.

h3. Visual references

Oyster Card, Transport for London.
eNFC, Inside Contactless.
Paypass, Mastercard.
ExpressPay, American Express.
FeliCa, Sony.
MiFare, various vendors.
Suica, East Japan Railway Company (JR East).
RFID Field Force Solutions, Nokia.
NFC shell for 3220, Nokia.
ERG Transit Systems payment, Dubai.
Various generic contactless vendors.
Contactless payment symbol, Mastercard.
Open Here, Paul Mijksenaar, Piet Westendorp, Thames and Hudson, 1999.
Understanding Comics, Scott McCloud, Harper, 1994.

Time that land forgot

There are two versions: a “low-bandwidth”:/timeland/noimages.html no-image version and a “high-bandwidth”:/timeland/ version with images. There is also a “QuickTime movie”:http://polarfront.org/time_land_forgot.mov for people who can’t run Flash at a reasonable frame rate.

We have made the “source code”:http://www.polarfront.org/timeland.zip (.zip file) available for people who want to play with it, under the General Public License (GPL).

h2. Background: Narrative images and GPS tracks

Over the last five years Timo has been photographing daily experience using a digital camera and archiving thousands of images by date and time. Transient, ephemeral and numerous, these images have become a sequential narrative beyond the photographic frame. They sit somewhere between photography and film, with less emphasis on the single image in re-presenting experience.

For the duration of the workshop Timo used a GPS receiver to record tracklogs, capturing geographic co-ordinates for every part of the journey. It is this data that we explore here, using it to provide a history and context to the images.

This project is particularly relevant as mobile phones start to integrate location-aware technology and as cameraphone image-making becomes ubiquitous.

h2. Scenarios

We discussed the context in which we were creating an application: who would use it, and what would they be using it for? In our case, Timo is using the photographs as a personal diary, and this is the first scenario: a personal life-log, where visualisations help to recollect events, time-periods and patterns.

Then there is the close network of friends and family, or participants in the same journey, who are likely to invest time looking at the system and finding their own perspective within it. Beyond that there is a wider audience interested in images and information about places, who might want a richer understanding of places they have never been, or places they have experienced from a different perspective.

Images are immediately useful and communicative for all sorts of audiences; it was less clear how we should use the geographic information. The GPS tracks might only be interesting to people who actually participated in that particular journey or event.

h2. Research

We looked at existing photo-mapping work, discovering a lot of projects that attempted to give images context by placing them within a map. But these visualisations and interfaces seemed to foreground the map over the images, and photos embedded in maps get lost through layering. The problem was most dramatic with topographic or street maps full of superfluous detail, detracting from the immediate experience of the image.

Even the exhaustive and useful research from Microsoft’s “World Wide Media Index (WWMX)”:http://wwmx.org/ arrives at a somewhat unsatisfactory visual interface. The paper details five interesting mapping alternatives, and settles on a solution that averages the number of photos in any particular area, giving it a representatively scaled ‘blob’ on a street map (see below). Although this might solve some problems with massive data-sets, it seems a rather clunky interface solution, overlooking something that is potentially beautiful and communicative in itself.

!/images/photomap_wwmx.jpg!

p(caption). See “http://wwmx.org/docs/wwmx_acm2003.pdf”:http://wwmx.org/docs/wwmx_acm2003.pdf page 8

Other examples (below) show different mapping solutions: Geophotoblog pins images to locations but staggers them in time to avoid layering; an architectural map from Pariser Platz, Berlin gives an indication of direction; and an aerial photo is used as context for user-submitted photos at Tokyo-picturesque. There are more examples of prior work, papers and technologies “here”:http://www.elasticspace.com/index.php?id=44.

!/images/photomap_berlin.jpg!

p(caption). Image from “Pariser Platz Berlin”:http://www.fes.uwaterloo.ca/u/tseebohm/berlin/map-whole.html

!/images/photomap_geophotoblog.jpg!

p(caption). Image from “geophotoblog”:http://brainoff.com/geophotoblog/plot/

!/images/photomap_tokyo.jpg!

p(caption). Image from “Tokyo Picturesque”:http://www.downgoesthesystem.com/devzone/exiftest/final/

By shifting the emphasis to location, the aspect most clearly lacking in these representations is _time_, and with it the context in which the images can most easily form a narrative for the viewer. The images become subordinate to the map, which removes their instant expressivity.

We feel that these orderings make spatially annotated images a weaker proposition than simple sequential images in terms of telling the story of the photographer. This is very much a problem of the seemingly objective space contained in the GPS coordinates versus the subjective place of actual experience.

h2. Using GPS Data

We started our technical research by looking at the data available to us, discovering several kinds of data implicit in the GPS tracks that could be useful as context, many of them seldom exposed:

* location
* heading
* speed in 3 dimensions
* elevation
* time of day
* time of year

With a little processing, and a little extra data, we can find (see the sketch after this list):

* acceleration in 3 dimensions
* change in heading
* mode of transportation (roughly)
* nearest landmark or town
* actual (recorded) temperature and weather
* many other possibilities based on local, syndicated data
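As a sketch of the kind of processing we mean, assuming a GPX file of timestamped trackpoints: speed falls out of successive positions, and acceleration out of successive speeds. The file name is a placeholder.

bc.. # Sketch: derive speed and acceleration from a GPX tracklog.
import math
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {'gpx': 'http://www.topografix.com/GPX/1/1'}

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

points = []
for pt in ET.parse('tracklog.gpx').iterfind('.//gpx:trkpt', NS):
    t = datetime.fromisoformat(pt.find('gpx:time', NS).text.replace('Z', '+00:00'))
    points.append((float(pt.get('lat')), float(pt.get('lon')), t))

speeds = []  # (time, metres per second)
for (lat1, lon1, t1), (lat2, lon2, t2) in zip(points, points[1:]):
    dt = (t2 - t1).total_seconds() or 1.0
    speeds.append((t2, haversine(lat1, lon1, lat2, lon2) / dt))

# Acceleration is the change in speed over time; its extremes pick out
# the arrivals and departures discussed below.
accels = [(v2 - v1) / ((t2 - t1).total_seconds() or 1.0)
          for (t1, v1), (t2, v2) in zip(speeds, speeds[1:])]

p. Heading, mode of transport and the rest are coarser inferences layered on top of these same deltas.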

Would it be interesting to use acceleration as a way of looking at photos? We would be able to select arrivals and departures by choosing images that were taken at moments of greatest acceleration or deceleration. Would these images be the equivalent of ‘establishing’, ‘resolution’ or ‘transition’ shots in film, generating a good narrative frame for a story?

Would looking at photos by a specific time of day give a good indication of the patterns and habits of daily life? The superimposition of the daily unfolding trails of a habitual office-dwelling creature might show interesting departures from rote behaviour.

h2. Using photo data

By analysing and visualising image metadata we wanted to look for ways of increasing the expressive qualities of an image library. Almost all digital images are saved with the date and time of capture, but we also found unexplored tags in the EXIF data that accompanies digital images:

* exposure
* aperture
* focus distance
* focal length
* white balance

We analysed metadata from almost 7000 photographs taken between 18 February and 26 July 2004 to see patterns that we might be able to exploit for new interfaces. We specifically looked for patterns that helped identify changes over the course of the day.

!/images/photomap_EXIF_small.gif!:/images/photomap_EXIF_large.gif

p(caption). Shutter, Aperture, Focal length and File size against time of day (click for larger version)

This shows an increase in shutter speed and aperture during the middle of the day. The images also become sharper during daylight hours, indicated by an increased file-size.

!/images/photomap_times_small.gif!:/images/photomap_times_large.gif

p(caption). Date against time of day (click for larger version)

This shows definite patterns: holidays and travels are clearly visible (three horizontal clusters towards the top), as are late night parties and early morning flights. This gives us huge potential for navigation and interface. Image-based ‘life-log’ applications like “Flickr”:http://www.flickr.com and “Lifeblog”:http://www.nokia.com/lifeblog are appearing; the visualisation of this lightweight metadata will be invaluable for re-presenting and navigating large photographic archives like these.

Matias Arje – also at the Iceland workshop – has done “valuable work”:http://arje.net/locative/ in this direction.

h2. Technicalities

Getting at the GPS and EXIF data was fairly trivial, though it did demand some testing and swearing.

We both work on Apple OS X systems, and we had to borrow a PC to get the tracklogs reliably out of Timo’s GPS and into Garmin’s MapSource. We decided to use GPX as our format for the GPS tracks; GPSBabel happily created this data from the original Garmin files.

The EXIF was parsed out of the images by a few lines of Python using the EXIF.py module and turned into another XML file containing image file name and timestamp.

We chose Flash as the container for the front end; it is ubiquitous, and it is Even’s programming poison of choice for visualisation. Flash reads both the GPX and EXIF XML files and generates the display in real time.

More on our choices of technologies “here”:http://www.elasticspace.com/2004/07/geo-referenced-photography.

h2. First prototype

!/images/timeland_screenshot06.jpg!:/timeland/

“View prototype”:http://www.elasticspace.com/timeland/

Mirroring Timo’s photography and documentation effort, Even has invested serious time and thought in “dynamic continuous interfaces”:http://www.polarfront.org. The first prototype is a linear experience of a journey, suitable for a gallery or screening, where images are overlaid into textural clusters of experience. It shows a scaling representation of the travel route based on the distance covered in the last 20-30 minutes. Images recede in scale and importance as they move back in time. Each tick represents one minute; every red tick represents an hour.

We chose to create a balance of representation in the interface around a set of priorities: first image (for expressivity), then time (for narrative), then location (for spatialising, and commenting on, image and time).

In making these interfaces there is the problem of scale. The GPS data itself has a resolution down to a few metres, but the range of speeds a person can travel at varies wildly across different modes of transportation. The interface therefore had to take into account the temporo-spatial scope of the data and scale the resolution of the display accordingly.

This was solved by creating a ‘camera’ connected to a spring system that attempts to centre the image on the advancing ‘now’ while keeping a recent history of 20 points in view. The parser for the GPS tracks discards the positional data between whole minutes, and the animation is driven forward by each new ‘minute’ found in the track and inserted into the camera’s view. This animation system can be used to generate both animations and interactive views of the data set.
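A minimal sketch of that camera, with invented constants: a damped spring pulls the view centre towards the newest point, and the zoom is fitted to the extents of the last 20 points.

bc.. # Sketch of the spring camera; constants are invented for illustration.
STIFFNESS, DAMPING, HISTORY = 0.08, 0.85, 20

class SpringCamera:
    def __init__(self):
        self.x = self.y = self.vx = self.vy = 0.0
        self.recent = []                          # last HISTORY track points

    def tick(self, px, py):
        self.recent = (self.recent + [(px, py)])[-HISTORY:]
        # Spring force towards the advancing 'now', with friction.
        self.vx = (self.vx + (px - self.x) * STIFFNESS) * DAMPING
        self.vy = (self.vy + (py - self.y) * STIFFNESS) * DAMPING
        self.x += self.vx
        self.y += self.vy
        # Zoom out just enough to keep the recent history in view.
        xs = [p[0] for p in self.recent]
        ys = [p[1] for p in self.recent]
        extent = max(max(xs) - min(xs), max(ys) - min(ys), 1.0)
        return self.x, self.y, 1.0 / extent       # centre and scale per frame

p. The prototype itself is ActionScript inside Flash, but the update rule is the same idea in miniature.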

There are some issues with this strategy. There will be discontinuities in the tracklogs as the GPS is switched off during standstill and nights. Currently the system smoothes tracklog time to make breaks seem more like quick transitions.

The system should ideally maintain a ‘subjective feeling’ of time adjusted to picture-taking and movement; a temporal scaling as well as a spatial scaling. This would be an analogue to our own remembering of events: minute memories from double-loop roller-coasters, smudged holes of memory from sleepy nights.

Most of the tweaking in the animation system went into refining the extents system around the camera history and zoom, the acceleration and friction of the spring system, and the ratio between the insertion of new points and animation ticks.

In terms of processing speed this interface should ideally have been built in Java or as a stand-alone application, though tests have shown that Flash is able to parse a 6000-point tracklog and draw it on screen along with 400 medium-resolution images. Once the images and points have been drawn on the canvas, they animate with reasonable speed on mid-spec hardware.

h2. Conclusions

This prototype has proved that many technical challenges are solvable, and has given us a working space to develop more visualisations and interactive environments, using this as a tool for thinking about wider design issues in geo-referenced photography. We are really excited by the sense of ‘groundedness’ the visualisation gives over the images, and the way in which spatial relationships develop between images.

For Timo it has given a new sense of spatiality to image making: the images are no longer locked into a simple sequential narrative, but are affected by spatial differences like location and speed. He is now experimenting with more ambient recording: taking a photo exactly every 20 minutes, for example, in an effort to affect the presentation.

h2. Extensions

Another strand of ideas we explored used the metaphor of a 16mm “Steenbeck”:http://www.harvardfilmarchive.org/gallery/images/conservation_steenbeck.jpg edit deck: scrubbing 16mm film through the playhead and watching the resulting sound and image come together. We could use the scrubbing of an image timeline to control all of the other metadata, and give real control to the user. It would be exciting to explore a spatial timeline of images, correlated with contextual data like the GPS tracks.

We need to overcome the difficulty of obtaining quality data, especially if we expect this to work in an urban environment. GPS is not passive, and “requires a lot of attention to record tracks”:http://www.elasticspace.com/index.php?id=4. Overall our representation doesn’t require location accuracy, just consistency and ubiquity of data; we hope that something like cell-based tracking on a mobile phone becomes more ubiquitous and usable.

We would like to experiment further with the extracted image metadata. For large-scale overviews, images could be replaced by a simple rectangular proxy, coloured by the average hue of the original picture and taking its brightness (EV) from the exposure and aperture readings. This would show the actual brightness recorded by the camera’s light meter, instead of the brightness of the image.
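The exposure value mentioned here is the standard EV = log2(N²/t) for aperture N and shutter time t; a small sketch (the example settings are arbitrary):

bc.. # Brightness of a proxy rectangle from the camera's own exposure settings.
import math

def exposure_value(aperture, shutter_seconds):
    """Standard EV (at ISO 100): log2(N^2 / t). Higher EV = brighter scene."""
    return math.log2(aperture ** 2 / shutter_seconds)

print(exposure_value(8.0, 1 / 250))  # f/8 at 1/250s, a bright day: ~14.0
print(exposure_value(2.8, 1 / 15))   # f/2.8 at 1/15s, a dim interior: ~6.9

p. Pairing that EV with the average hue of the picture gives the coloured rectangle described above.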

Imagine a series of images from bright green vacation days, dark grey winter mornings or blue Icelandic glaciers, combined with the clusters and patterns that time-based visualisation offers.

We would like to extend the data sets to include other people: from teenagers using GPS camera phones in Japan to photojournalists. How would the visualisations differ, and are there variables that we can pre-set for different uses? And how would the map look with multiple trails to follow, as a collaboration between multiple people and multiple perspectives?

At a technical level it would be good to have more integration with developing standards: we would like to use “Locative packets”:http://locative.net/workshop/index.cgi?Locative_Packets, just need more time and reference material. This would make it useful as a visualisation tool for other projects, “Aware”:http://aware.uiah.fi/ for example.

We hope that the system will be used to present work from other workshops, and that an interactive installation of the piece can be set up at “Art+Communication”:http://rixc.lv/04/.

h2. Biographies

Even Westvang works between interaction design, research and artistic practice. Recent work includes a slowly growing digital organism that roams the LAN of a Norwegian secondary school and an interactive installation for the University of Oslo looking at immersion, interaction and narrative. Even lives and works in Oslo. His musings live on “polarfront.org”:http://www.polarfront.org and some of his work can be seen at “bengler.no”:http://www.bengler.no.

Timo Arnall is an interaction designer and researcher working in London, Oslo and Helsinki. Recent design projects include a social networking application, an MMS based interactive television show and a large media archiving project. Current research directions explore mapping, photography and marking in public places. Work and research can be seen at “elasticspace.com”:http://www.elasticspace.com.

h2. Screenshots

!/images/timeland_screenshot01.jpg!

!/images/timeland_screenshot04.jpg!

!/images/timeland_screenshot08.jpg!

!/images/timeland_screenshot11.jpg!

!/images/timeland_screenshot12.jpg!

!/images/timeland_screenshot16.jpg!

!/images/timeland_screenshot19.jpg!

!/images/timeland_screenshot20.jpg!

Photography and mapping from Afar

Posted on Jul 28, 2004 in Art, Conferences, Mapping, Narrative, Photography, Place, Project, Travel

h3. Synopsis

Exploring the space of narrative, images and personal geography. For three months I recorded every walk, drive, train journey and flight I took, while photographing spaces and places from daily life.

The project is the first step towards a visual language for spatially located imagery, looking at ways in which personal travelogues can become useful as communication and artefacts of personal memory.

h3. Description

Nine boards, four images each, sit above maps that provide spatial context. Each image is captioned with location information and a key linking it to a point on the map below. The images show spatial transition from one country to another, and a change of season.

The maps are GPS tracks, visualised as simple lines. The scale of each map is decided by the extents of the image locations. This effectively shows a transition from London to Oslo over the period of a few months. The maps give an interesting sense of transition; scale and movement are emphasised.

!/images/afar_photo_map04.jpg!:/images/afar_photo_map00.jpg

p(caption). All maps in sequence (click for full size image)

!/images/afar_photo_map05.jpg!

p(caption). All images in sequence

!/images/afar_photo_map06.jpg!

p(caption). Images (detail)

!/images/afar_photo_map07.jpg!

p(caption). Maps (detail)

h3. About the exhibition

AFAR is an exhibition in which 25 international artists were asked to produce work in response to the word ‘afar’. The intention was to establish a connection between the diverse artistic and creative fields the invited artists come from: architecture, dance, street art, design, audio, photography, VJing, video art, fashion design, painting and creative writing.

The exhibition was in Rhuset, Copenhagen, Denmark, from 8 – 23 July 2004.

Mobile interaction design case study

Pollen Mobile develops location-based services for the consumer and business markets. Mamjam is their first product: a location-based, social entertainment service built on Short Messaging Service (SMS) messages. It enables people in the same venue to chat with each other by sending text messages from their mobile phones.

p(caption). Mamjam interface: user match screens 1 and 2.

h2. Brief

Pollen approached us with a very broad intention to use SMS to drive social interaction and entertainment in new ways.

We initially developed three quirky ideas based on playground games, internet chat and community storytelling, which we presented as the basis for discovering business goals and user needs.

After our initial brainstorm, we initiated a more rigorous user-centred interaction design process, which is detailed in this case study.

h2. Handsets & Networks

We found several pivotal issues we needed to resolve: SMS has extremely limited functions, with few opportunities to create rich, engaging, extended interactions.

h3. Handsets

Mobile phone handsets provide no navigation between multiple messages, no indication of user status or location, and no practical means of viewing session history. Users are accustomed to using SMS for quick functional communication and extended contact with friends. They certainly do not rely on messages for any kind of complex interaction.

Every transaction between user and server on a mobile phone is a sessionless operation. Each message contains only the time it was sent, the number it was sent from and the content of the message [1].

Unlike HTTP systems, the server cannot rely on location and session information being stored in the message address. This is complex from a user-experience perspective, because people are used to the responses exhibited by systems that do carry session information, which behave quite differently [2].

h3. Networks

SMS messages are managed by the networks in cells; each cell carries the messages particular to its region. Cells are notoriously unreliable, and we found that it was common for messages to hang in the system for over ten minutes. This presented some serious problems: satisfying communication relies on a high level of continuity, and the timing between messages is a critical indicator of the emotional state of your chatting partner.

Mamjam’s service is location-based: users are in contact with other users in the same area. However, existing (second-generation) handsets cannot determine their own location, and although locations are triangulated by the network, this information is not publicly available. Location thus had to be provided manually by the user, in a way that could then be usefully interpreted by the server.

Researching and developing a reliable language for users to identify their location became central to the interaction design problem.

Many competing SMS services are currently internet-based, requiring a signup from a web site rather than directly from the handset.

h3. Modes

A system like this could conceivably be built without the use of modes [3]. From the user’s perspective a modeless system could be overly complex and exhausting: every message must somehow include exact commands and instructions for the server. But a modeless system is very attractive from a technical perspective: the server is more likely to interpret instructions correctly.

h2. Process

h3. Requirements

We consulted with Pollen and selected SMS users to draw up several personas and scenarios. This included contextual enquiry, business goals and user-requirements gathering. We identified the following requirements:

* Users must be able to join the service immediately, not just from a website prior to use.
* The service should accommodate both new and returning users.
* Users are likely to be exposed to the service through all sorts of channels, and therefore signing up should accommodate all points of entry.
* The structure should be designed to accommodate expansion of the service.
* The basic structure of the handshake should carry over to other SMS systems Pollen may choose to develop.

h3. First Iteration

The initial interaction architecture outlines our first intentions for the system. (For legal reasons we can’t include the full-size diagram.)

p(caption). Mamjam interaction architecture, version 1.

The system works in a similar way to internet-based chat rooms, connecting users who are ‘online’ at the same time, with the extra dimension that they are in the same physical place. Mamjam supports private, one-to-one communications only: users can’t shout to groups or broadcast messages. Once a user has found a chatting partner, the system simply directs the text traffic between them until one party decides to pursue someone else, or signs off.

This structure required users to enter a lot of information about themselves before they could initiate contact with one another. We felt this was valuable in order to reduce the interaction load while chatting. This also resulted from a (perhaps misguided) adherence to the ‘internet chat room’ model.

This system was implemented on Pollen’s test servers, and we organised user-testing sessions. This revealed several problems:

The sign-up process was off-putting. Users’ motivation for this product is entertainment and social contact: they weren’t happy to tolerate a lengthy sign-up process. This architecture required four messages for a new user to sign up. In some cases the user would spend the equivalent of a ten-minute voice call before they had connected with someone to chat. It was clear that the service needed to offer a quick method of signing up, perhaps at the expense of more advanced features.

In trying to optimise the system for both new and advanced users, signing up for the first time required a different interaction process from signing up a second time. There were also several different methods of identifying your location, to anticipate every possible user interaction. There were thus four or five possible entry points into the system. This caused more modal problems than anybody anticipated; the SMS server had to process language and match patterns in an almost infinite realm of possibilities.

h3. Second Iteration

It became clear that the three biggest problems for the social interaction process were:

* Aligning the system’s perception of user context with the actual user context.
* Ensuring users have an accurate perception of the system state.
* Maintaining a rich connection between users, allowing them to interpret and react to one another accurately.

This discrepancy between user perception and system perception can be referred to as ‘slippage’. Slippage is most problematic during the initial handshake when the user is most insecure about their request and about the system itself.

Text messages to and from SMS servers rarely arrive as punctually as they seem to in normal use. This meant it was possible for one of two users, both having agreed to start chatting, to reject the other on the basis that they had failed to reply to their confirmation. In fact the rejected user had replied with confirmation, but their message had been delayed. The message would then arrive with the first user who had since moved to a new part of the interaction process. Their reply could potentially interrupt another process or get lost in the system, confusing and infuriating both users. Serious slippage!

We also found, as predicted, that users did not read back through their old messages. Some phones have a very limited capacity for storing messages, and no phone facilitates simple navigation of previous messages, so the current message was the only one we could rely on users to react to.

p(caption). Mamjam interaction architecture, version 2.

The second interaction architecture was developed with the problems described above in mind. Some changes have been made to the system since, mostly around modal issues and the commands through which users communicate with the server. Although there are still issues regarding slippage, the second iteration makes this much less of a problem. The system is basically modeless, except for the first transaction. All users (new and existing) enter the system in the same way, new users are chatting within two messages, and existing users are potentially chatting after their first message.
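For illustration only (Mamjam’s server was Pollen’s own), the near-modeless shape of the second iteration can be sketched as a message handler keyed on the sender’s number:

bc.. # Illustrative sketch of the near-modeless handshake; not Pollen's code.
users = {}    # phone number -> {'location': str, 'partner': number or None}
waiting = {}  # location -> phone number waiting for a match

def send(to, text):  # stand-in for the SMS gateway
    print('->', to, ':', text)

def handle_message(sender, text):
    if sender not in users:
        # The sole modal step: a first message must name a location.
        words = text.upper().split()
        if len(words) >= 2 and words[0] == 'MAMJAM':
            users[sender] = {'location': words[1], 'partner': None}
            match(sender)
        else:
            send(sender, 'To chat, reply with MAMJAM and your location, e.g. MAMJAM LONDON')
    elif users[sender]['partner']:
        send(users[sender]['partner'], text)  # paired: simply relay the traffic
    else:
        match(sender)  # returning user: try again to find a partner

def match(number):
    place = users[number]['location']
    other = waiting.pop(place, None)
    if other and other != number:
        users[number]['partner'], users[other]['partner'] = other, number
        send(number, 'Matched! Say hello.')
        send(other, 'Matched! Say hello.')
    else:
        waiting[place] = number
        send(number, 'Looking for someone near you...')

p. New users are chatting within two messages and returning users within one, matching the behaviour described above.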

For an overview of the commands and interactions possible with the system look at the Mamjam How To and Advanced Features.

h2. In Use

Mamjam is now fully operational, spinning off other services based on the basic interaction architectures we designed for the initial chat service.

h3. Extended Services

In a recent, typical promotion at the Mood Bar in Carlisle, Mamjam sent a message to people who had Mamjamed there, offering them a discounted drink if they showed their mobile at the bar. The conversion rate from message sent to offer redeemed was 30%.

h3. Building relationships, Community and Storytelling

Having heard that a large number of people were texting their ex-partners late at night, under the influence, Mamjam sent a message asking users for their own dating disasters. 13% of people responded with their own story by SMS, 50% of those within the first hour.

These users were not given incentives like promotional offers, and the call to action was not a simple generic mechanic like ‘reply YES or NO’; it was much more involved. Users were required to read and understand the message received, then conceive and craft a response to fit into 160 characters. Yet the response rate was high and the quality of the responses excellent.

h3. Stimulating usage

By reminding BT users of a free messaging offer, the objective was to stimulate Mamjaming outside the locations in which users first Mamjamed.

p(quote). Message: Spice up your text life for FREE! Mamjam is still FREE to receive for BT users. To chat now just reply with mamjam and your location eg MAMJAM LONDON.

7% of the BT users on the database read the message and then decided to log on to Mamjam. Between them, on that day, they sent 3,400 chat messages.

h3. Some usage statistics

* First-time Mamjam users begin chatting after sending only two SMS messages.
* Users are matched with someone within 120 seconds of logging onto the service for the first time.
* The average Mamjam user sends and receives 24 SMS messages per session.
* The top 10% of users send 60 SMS per month and generate an additional 72 outbound messages, generating an additional £18 of revenue for the network operators.
* The top 50% of users send 20 SMS per month and generate an additional 24 outbound messages, generating £6.30 of revenue for the network operator.
* Repeat usage: 30% of daily users are repeat users.

h2. Conclusions

We think that the best solution for this particular service has been found, given the limitations detailed above.

There are obvious and not-so-obvious limitations to SMS communications. The most notable are the handsets’ continued reliance on short messaging rather than a more advanced chat service, and the network operators’ inability to develop services and platforms outside of their own internal structures.

This research and product development has generated a lot of further ideas for asynchronous communication structures, and for communication solutions on packet-switched networks for mobile devices.

h2. Footnotes

[1] Some phones support greater functionality than others; Mamjam needed to support a broad demographic, so only the most basic functionality was available to us.

When sending a message from a server it can be set to ‘Flash’ mode, causing the message to open on the user’s phone immediately. Some cells also support a ‘broadcast to cell’ function, whereby a single message can be sent to all phones within that cell. This function is expensive and only available to phones on a given network.

[2] Information transferred over HTTP is also sessionless, but browsers and servers are afforded functionality to help them overcome modal issues: cookies, history and links, for example. There are other interface restrictions to consider regarding the manipulation of text, such as the absence of cut and paste.

[3] The most comprehensive discussion of modes I have come across is in The Humane Interface by Jef Raskin, p. 37.

h2. References and Links

At the time of writing the Mamjam numbers are 82888 (BT/Vodafone) or 07970 158 158 (all other networks). Just send any text message to sign up and test it for yourself.

* Mamjam website
* Pollen Mobile
* Mamjam reviewed at The Guardian
* Jef Raskin, “Modes” (section 3-2), The Humane Interface, 2000, p. 37.

h2. Professional Credits

h3. Interaction design

Jack Schulze, Adi Nachman, Timo Arnall

h3. Technical Architecture

John Gillespie

h3. Information Design

Jack Schulze