Mother. Father. Always you wrestle inside me.

Posted on Jul 10, 2011 in Film, Photography

A truth that releases a waterfall of emotion. It is this energy that propels us through The Tree of Life. A voluptuous, bulging energy shaped and encouraged by sweeping camera movement, ultra wide lenses, lyrical blocking, the safe-harbor of Jessica Chastain’s face, and the vacillation in Hunter McCracken’s. These combine to create scenes that perfectly capture the rapturous feelings of childhood. Sensations evoked when light & dark entwine, and our instinctual knowledge that these things are the same.

And on how to approach the film:

A moment long enough for me to relax, and I was suddenly taken by a feeling of great tenderness and calm. I don’t completely understand why I felt this, but the inclusion of these CGI dinosaurs struck me as a particularly affectionate and loving decision. Terrence Malick believes in his audiences, and has faith that we also can believe. It’s the feeling of your mother brushing the hair off your forehead as she tells you a bedtime story. You protest because she’s changed a part of the usual tale, or it’s not the way you want it to be, but smiling, she says “Shhh shhh. Just listen.”

From the brilliant Kartina Richardson.

Chronocyclegraph of bricklaying

Posted on Jun 21, 2010 in Film, Photography

By Frank Gilbreth (1912)

Via lecture 4: traces at light matters.

CCD and computational photography

Posted on Mar 18, 2010 in Media, Photography, Technology, Ubicomp

A few links on imaging and computation:

I’ve concluded that the promise of RFID was eclipsed by another technology out there that’s poised to become more and more disruptive, not only to RFID, but to a host of technologies, and that’s the CCD.

from CCD by Joe Gregorio. Via BERG.

Cameras might allow a photographer to record a scene and then alter the lighting or shift the point of view, or even insert fictitious objects.

from Computational Photography, American Scientist

The camera as a device you carry has completely disappeared. Image sensors have become part of the literal fabric of everyday life.

from What Photography Will Look Like By 2060

Crossing Borders

Posted on Feb 2, 2010 in Film, Photography

A visualization of private spaces in public photography. A design probe on digital mannerism by Choy Ka Fai.

Via BERG.

Graffiti as conversation

Posted on Feb 28, 2005 in Photography, Place, Urbanism

Phone conversation

I’ve been photographing layers of conversation in graffiti, and tagging the pictures with conversation. Prior art for spatial annotation?

Spatial memory at Design Engaged 2004

Notes on two related projects:

1. Time that land forgot

  • A project in collaboration with Even Westvang
  • Made in 10 days at the Icelandic locative media workshop, summer 2004
  • Had the intention of making photo archives and gps trails more useful/expressive
  • Looked at patterns in my photography: 5 months, 8000 photos, visualised them by date / time of day. Fantastic resource for me: late night parties, early morning flights, holidays and the effect of midnight sun is visible.
  • time visualisation

2. Marking in urban public space

I’ve also been mapping stickering, stencilling and flyposting: walking around with the camera+gps and photographing examples of marking (not painted graffiti).

This research looks at the marking of public space by investigating the physical annotation of the city: stickering, stencilling, tagging and flyposting. It attempts to find patterns in this marking practice, looking at visibility, techniques, process, location, content and audience. It proposes ways in which this marking could be a layer between the physical city and digital spatial annotation.

Some attributes of sticker design

  • Visibility: contrast, monochromatic, patterns, bold shapes, repetition
  • Patina: history, time, decay, degradation, relevance, filtering, social effects
  • Physicality: residue of physical objects: interesting because these could easily contain digital info
  • Adaptation and layout: layout is usually respectful, innovative use of DTP and photocopiers, adaptive use of sticker patina to make new messages on top of old

Layers of information build on top of each other: as with graffiti, stickers show their age through fading and patina, flyposters become unstuck, torn and covered in fresh material. Viewed from a distance the patina is evident; new work tends to respect old, and even commercial flyposting respects existing graffiti work.

Techniques vary from strapping zip-ties through cardboard and around lampposts for large posters, to simple hand-written notes stapled to trees, and short-run printed stickers. One of the most fascinating and interactive techniques is the poster offering strips of tear-off information. These are widely used, even in remote areas.

Initial findings show that stickers don’t relate to local space: they are less about specific locations than about finding popular locations, “cool neighbourhoods” or just ensuring repeat exposure. This is the opposite of my expectations, and perhaps sheds some light on the current success/failure of spatial annotation projects.

I am particularly interested in the urban environment as an interface to information and an interaction layer for functionality, using our spatial and navigational senses to access local and situated information.

There is concern that in a dense, spatially annotated city we might have an overload of information: how do we filter and foreground relevant, important information? Given that current technologies have very short ranges (10-30mm), we might be able to use our existing spatial skills to navigate overlapping information. We could shift some of the burden of information retrieval from information architecture to physical space.

I finished by showing this animation by Kriss Salmanis, a young Latvian artist. Amazing re-mediation of urban space through stencilling, animation and photography. (“Un ar reizi naks tas bridis” roughly translates as “And in time the moment will come”.)

Footnotes/references

Graffiti Archaeology, Cassidy Curtis
otherthings.com/grafarc

Street Memes, collaborative project
streetmemes.com

Spatial annotation projects list
elasticspace.com/2004/06/spatial-annotation

Nokia RFID kit for 5140
nokia.com/nokia/0,,55739,00.html

Spotcodes, High Energy Magic
highenergymagic.com/spotcode

‘Mystery Meat navigation’, Vincent Flanders
fixingyourwebsite.com/mysterymeat.html

RDF as barcodes, Chris Heathcote
undergroundlondon.com/antimega/archives/2004_02.html

Implementation: spatial literature
nickm.com/implementation

Yellow Arrow
yellowarrow.org

Time that land forgot

There are two versions: a low-bandwidth no-image version and a high-bandwidth version with images. There is also a Quicktime movie for people that can’t run Flash at a reasonable frame rate.

We have made the source code (.zip file) available for people that want to play with it, under a General Public License (GPL).

Background: Narrative images and GPS tracks

Over the last five years Timo has been photographing daily experience using a digital camera and archiving thousands of images by date and time. Transient, ephemeral and numerous; these images have become a sequential narrative beyond the photographic frame. They sit somewhere between photography and film, with less emphasis on the single image in re-presenting experience.

For the duration of the workshop Timo used a GPS receiver to record tracklogs, capturing geographic co-ordinates for every part of the journey. It is this data that we explore here, using it to provide a history and context to the images.

This project is particularly relevant as mobile phones start to integrate location-aware technology and as cameraphone image-making becomes ubiquitous.

Scenarios

We discussed the context in which we were creating an application: who would use it, and what would they be using it for? In our case, Timo is using the photographs as a personal diary, and this is the first scenario: a personal life-log, where visualisations help to recollect events, time-periods and patterns.

Then there is the close network of friends and family, or participants in the same journey, who are likely to invest time looking at the system and finding their own perspective within it. Beyond that there is a wider audience interested in images and information about places, that might want a richer understanding of places they have never been, or places that they have experienced from a different perspective.

Images are immediately useful and communicative for all sorts of audiences; it was less clear how we should use the geographic information. The GPS tracks might only be interesting to people that actually participated in that particular journey or event.

Research

We looked at existing photo-mapping work, discovering a lot of projects that attempted to give images context by placing them within a map. But these visualisations and interfaces seemed to foreground the map over the images, and photos embedded in maps get lost through layering. The problem was most dramatic with topographic or street maps full of superfluous detail, detracting from the immediate experience of the image.

Even the exhaustive and useful research from Microsoft’s World Wide Media Index arrives at a somewhat unsatisfactory visual interface. The paper details five interesting mapping alternatives, and settles on a solution that averages the number of photos in any particular area, giving it a representatively scaled ‘blob’ on a street map (see below). Although this might solve some problems with massive data-sets, it seems a rather clunky interface solution, overlooking something that is potentially beautiful and communicative in itself.

See http://wwmx.org/docs/wwmx_acm2003.pdf page 8

Other examples (below) show other mapping solutions; Geophotoblog pins images to locations, but staggers them in time to avoid layering, an architectural map from Pariser Platz, Berlin gives an indication of direction, and an aerial photo is used as context for user-submitted photos at Tokyo-picturesque. There are more examples of prior work, papers and technologies here.

Image from Pariser Platz Berlin

Image from geophotoblog

Image from Tokyo Picturesque

By shifting the emphasis to location, the aspect most clearly lacking in these representations is time, and with it the context in which the images can most easily form a narrative for the viewer. The images are subordinate to the map, which removes their instant expressivity.

We feel that these orderings make spatially annotated images a weaker proposition than simple sequential images in terms of telling the story of the photographer. This is very much a problem of the seemingly objective space as contained by the GPS coordinates versus the subjective place of actual experience.

Using GPS Data

We started our technical research by looking at the data that is available to us, discovering data implicit in the GPS tracks that could be useful in terms of context, much of which is seldom exposed:

  • location
  • heading
  • speed in 3 dimensions
  • elevation
  • time of day
  • time of year

With a little processing, and a little extra data, we can find:

  • acceleration in 3 dimensions
  • change in heading
  • mode of transportation (roughly)
  • nearest landmark or town
  • actual (recorded) temperature and weather
  • many other possibilities based on local, syndicated data
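Several of the derived values listed above fall out of simple arithmetic on consecutive trackpoints. A minimal sketch, assuming trackpoints as (lat, lon, unix-time) tuples; the tuple layout and function names are illustrative, not from the project:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Ground distance in metres between two WGS84 points.
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def derive(points):
    """points: list of (lat, lon, unix_time). Returns (speed m/s, heading deg)
    for each consecutive pair of trackpoints."""
    out = []
    for (lat1, lon1, t1), (lat2, lon2, t2) in zip(points, points[1:]):
        d = haversine_m(lat1, lon1, lat2, lon2)
        dt = max(t2 - t1, 1e-9)
        # Initial bearing from point 1 to point 2, 0° = north, 90° = east.
        y = math.sin(math.radians(lon2 - lon1)) * math.cos(math.radians(lat2))
        x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
             - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.cos(math.radians(lon2 - lon1)))
        heading = (math.degrees(math.atan2(y, x)) + 360) % 360
        out.append((d / dt, heading))
    return out
```

Acceleration is then just the change in speed between successive pairs, and change in heading falls out the same way.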

Would it be interesting to use acceleration as a way of looking at photos? We would be able to select arrivals and departures by choosing images that were taken at moments of greatest acceleration or deceleration. Would these images be the equivalent of ‘establishing’, ‘resolution’ or ‘transition’ shots in film, generating a good narrative frame for a story?

Would looking at photos by a specific time of day give a good indication of patterns and habits of daily life? The superimposition of the daily unfolding trails of a habitual office-dwelling creature might show interesting departures from rote behaviour.

Using photo data

By analysing and visualising image metadata we wanted to look for ways of increasing the expressive qualities of an image library. Almost all digital images are saved with the date and time of capture, but we also found unexplored tags in the EXIF data that accompany digital images:

  • exposure
  • aperture
  • focus distance
  • focal length
  • white balance

We analysed metadata from almost 7000 photographs taken between 18 February and 26 July 2004 to see patterns that we might be able to exploit for new interfaces. We specifically looked for patterns that helped identify changes over the course of the day.

Shutter, aperture, focal length and file size against time of day (click for larger version)

This shows an increase in shutter speed and aperture during the middle of the day. The images also become sharper during daylight hours, indicated by an increased file size.

Date against time of day (click for larger version)

This shows definite patterns: holidays and travels are clearly visible (three horizontal clusters towards the top), as are late night parties and early morning flights. This gives us huge potential for navigation and interface. Image-based ‘life-log’ applications like Flickr and Lifeblog are appearing; the visualisation of this light-weight metadata will be invaluable for re-presenting and navigating large photographic archives like these.

Matias Arje – also at the Iceland workshop – has done valuable work in this direction.
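A date / time-of-day plot like this needs nothing more than the capture timestamp of each image. A sketch of extracting the two plot axes, assuming EXIF’s standard `YYYY:MM:DD HH:MM:SS` timestamp format (the function name is ours):

```python
from datetime import datetime

def scatter_points(timestamps):
    """Turn EXIF-style 'YYYY:MM:DD HH:MM:SS' stamps into (date, hour-of-day)
    pairs: the horizontal and vertical axes of the date / time-of-day plot."""
    pts = []
    for stamp in timestamps:
        dt = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
        hour = dt.hour + dt.minute / 60 + dt.second / 3600
        pts.append((dt.date(), hour))
    return pts
```

Midnight-sun partying and early flights then show up as points high or low on the hour axis, clustered by date.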

Technicalities

Getting at the GPS and EXIF data was fairly trivial, though it did demand some testing and swearing.

We are both based on Apple OS X systems, and we had to borrow a PC to get the tracklogs reliably out of Timo’s GPS and into Garmin’s Mapsource. We decided to use GPX as our format for the GPS tracks; GPSBabel happily created this data from the original Garmin files.

The EXIF was parsed out of the images by a few lines of Python using the EXIF.py module and turned into another XML file containing image file name and timestamp.
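The XML-generation half of that step can be sketched with the standard library alone; the element and attribute names here are illustrative, not the project’s actual schema:

```python
import xml.etree.ElementTree as ET

def images_to_xml(images):
    """images: list of (filename, 'YYYY:MM:DD HH:MM:SS') pairs, e.g. pulled
    from each image's EXIF DateTimeOriginal tag. Returns an XML string of
    the kind a front end could load alongside the GPX tracklog."""
    root = ET.Element("images")
    for name, stamp in images:
        img = ET.SubElement(root, "image")
        img.set("file", name)
        img.set("timestamp", stamp)
    return ET.tostring(root, encoding="unicode")
```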

We chose Flash as the container for the front end; it is ubiquitous, and Even’s programming poison of choice for visualisation. Flash reads both the GPX and EXIF XML files and generates the display in real-time.

More on our choices of technologies here.

First prototype

View prototype

Mirroring Timo’s photography and documentation effort, Even has invested serious time and thought in dynamic continuous interfaces. The first prototype is a linear experience of a journey, suitable for a gallery or screening, where images are overlaid into textural clusters of experience. It shows a scaling representation of the travel route based on the distance covered in the last 20-30 minutes. Images recede in scale and importance as they move back in time. Each tick represents 1 minute; every red tick represents an hour.

We chose to create a balance of representation in the interface around a set of priorities: first image (for expressivity), then time (for narrative), then location (for spatialising, and commenting on, image and time).

In making these interfaces there is the problem of scale. The GPS data itself has a resolution down to a few meters, but the range of speeds a person can travel at varies wildly through different modes of transportation. The interface therefore had to take into account the temporo-spatial scope of the data and scale the resolution of display accordingly.

This was solved by creating a ‘camera’ connected to a spring system that attempts to center the image on the advancing ‘now’ while keeping a recent history of 20 points in view. The parser for the GPS tracks discards the positional data between the minutes, and the animation is driven forward by every new ‘minute’ we find in the track, which is inserted into the view of the camera. This animation system can be used to generate both animations and interactive views of the data set.

There are some issues with this strategy. There will be discontinuities in the tracklogs as the GPS is switched off during standstills and nights. Currently the system smooths tracklog time to make breaks seem more like quick transitions.

The system should ideally maintain a ‘subjective feeling’ of time adjusted to picture-taking and movement; a temporal scaling as well as a spatial scaling. This would be an analog to our own remembering of events: minute memories from double-loop roller-coasters, smudged holes of memory from sleepy nights.

Most of the tweaking in the animation system went into refining the extents system around the camera history & zoom, the acceleration and friction of the spring systems, and the ratio between insertion of new points and animation ticks.

In terms of processing speed this interface should ideally have been built in Java or as a stand-alone application, though tests have shown that Flash is able to parse a 6000-point tracklog and draw it on screen along with 400 medium-resolution images. Once the images and points have been drawn on the canvas they animate with reasonable speed on mid-spec hardware.
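The spring-driven camera described above can be sketched as a per-tick update; the stiffness and damping constants here are illustrative, not the values tuned in the Flash prototype:

```python
def spring_tick(cam, vel, target, stiffness=0.1, damping=0.8):
    """One animation tick: a damped spring pulls the camera centre
    toward the advancing 'now' point of the tracklog."""
    vx = (vel[0] + (target[0] - cam[0]) * stiffness) * damping
    vy = (vel[1] + (target[1] - cam[1]) * stiffness) * damping
    return (cam[0] + vx, cam[1] + vy), (vx, vy)

# Ease the camera toward a newly inserted 'minute' point over 200 ticks.
cam, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(200):
    cam, vel = spring_tick(cam, vel, (100.0, 50.0))
```

The same spring, applied to the camera’s zoom extents as well as its centre, gives the smooth scaling behaviour when speeds change between walking and flying.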

Conclusions

This prototype has proved that many technical challenges are solvable, and given us a working space to develop more visualisations and interactive environments, using this as a tool for thinking about wider design issues in geo-referenced photography. We are really excited by the sense of ‘groundedness’ the visualisation gives over the images, and the way in which spatial relationships develop between images.

For Timo it has given a new sense of spatiality to image making: the images are no longer locked into a simple sequential narrative, but affected by spatial differences like location and speed. He is now experimenting with more ambient recording – taking a photo exactly every 20 minutes, for example – in an effort to affect the presentation.

Extensions

Another strand of ideas we explored was using the metaphor of a 16mm Steenbeck edit deck: scrubbing 16mm film through the playhead and watching the resulting sound and image come together. We could use the scrubbing of an image timeline to control all of the other metadata, and give real control to the user. It would be exciting to explore a spatial timeline of images, correlated with contextual data like the GPS tracks.

We need to overcome the difficulty of obtaining quality data, especially if we expect this to work in an urban environment. GPS is not passive, and requires a lot of attention to record tracks. Overall our representation doesn’t require location accuracy, just consistency and ubiquity of data; we hope that something like cell-based tracking on a mobile phone becomes more ubiquitous and usable.

We would like to experiment further with the extracted image metadata. For large-scale overviews, images could be replaced by a simple rectangular proxy, coloured by the average hue of the original picture and taking brightness (EV) from exposure and aperture readings. This would show the actual brightness recorded by the camera’s light meter, instead of the brightness of the image.

Imagine a series of images from bright green vacation days, dark grey winter mornings or blue Icelandic glaciers, combined with the clusters and patterns that time-based visualisation offers.
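The brightness half of that proxy follows directly from the standard exposure-value formula, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds:

```python
import math

def exposure_value(f_number, shutter_s):
    """Exposure value from aperture and shutter speed: EV = log2(N^2 / t).
    At a fixed ISO, a higher EV means the meter saw a brighter scene."""
    return math.log2(f_number ** 2 / shutter_s)
```

For example, f/2.8 at 1/60 s gives an EV of about 8.9 (indoor light), while f/16 at 1/125 s gives about 15 (full daylight), so the proxy rectangles would span roughly that range.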

We would like to extend the data sets to include other people: from teenagers using GPS camera phones in Japan to photojournalists. How would visualisations differ, and are there variables that we can pre-set for different uses? And how would the map look with multiple trails to follow, as a collaboration between multiple people and multiple perspectives?

At a technical level it would be good to have more integration with developing standards: we would like to use Locative packets, we just need more time and reference material. This would make the system useful as a visualisation tool for other projects, Aware for example.

We hope that the system will be used to present work from other workshops, and that an interactive installation of the piece can be set up at Art+Communication.

Biographies

Even Westvang works between interaction design, research and artistic practice. Recent work includes a slowly growing digital organism that roams the LAN of a Norwegian secondary school, and an interactive installation for the University of Oslo looking at immersion, interaction and narrative. Even lives and works in Oslo. His musings live on polarfront.org and some of his work can be seen at bengler.no.

Timo Arnall is an interaction designer and researcher working in London, Oslo and Helsinki. Recent design projects include a social networking application, an MMS-based interactive television show and a large media archiving project. Current research directions explore mapping, photography and marking in public places. Work and research can be seen at elasticspace.com.

Screenshots

Photography and mapping from Afar

Posted on Jul 28, 2004 in Art, Conferences, Mapping, Narrative, Photography, Place, Project, Travel

Synopsis

Exploring the space of narrative, images and personal geography. For three months I recorded every walk, drive, train journey and flight I took, while photographing spaces and places from daily life.

The project is the first step towards a visual language for spatially located imagery, looking at ways in which personal travelogues can become useful as communication and artefacts of personal memory.

Description

Nine boards, four images each, sit above maps that provide spatial context. Each image is captioned with location information and a key linking it to a point on the map below. The images show spatial transition from one country to another, and a change of season.

The maps are GPS tracks, visualised as simple lines. The scale of each map is decided by the extents of the image locations. This effectively shows a transition from London to Oslo over the period of a few months. The maps give an interesting sense of transition; scale and movement are emphasised.

All maps in sequence (click for full size image)

All images in sequence

Images (detail)

Maps (detail)

About the exhibition

AFAR is an exhibition where 25 international artists have been asked to produce work in response to the word ‘afar’. The initial intention was to establish a connection between the diverse artistic and creative forms that the invited artists come from: architecture, dance, street art, design, audio, photography, VJing, video art, fashion design, painting and creative writing.

The exhibition was in Råhuset, Copenhagen, Denmark, from 8–23 July 2004.

Geo-referenced photography

Posted on Jul 6, 2004 in Photography, Place, Research, Technology

The easiest way of linking photos to locations is to combine the time-stamps from both a digital camera and GPS receiver or other location-aware device. If this data is available (over the same period of time) it’s possible to process a series of images and location tracks to stamp each image with location metadata.
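The matching step can be sketched as a nearest-timestamp lookup, assuming the camera and GPS clocks have already been reconciled (the tuple layout is illustrative):

```python
import bisect

def geotag(image_time, track):
    """track: list of (unix_time, lat, lon) tuples sorted by time.
    Returns the trackpoint nearest in time to the image timestamp."""
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, image_time)
    # Only the neighbours on either side of the insertion point can be nearest.
    candidates = track[max(i - 1, 0):i + 1] or [track[-1]]
    return min(candidates, key=lambda p: abs(p[0] - image_time))
```

A real pipeline would also apply a fixed clock-offset correction and skip images that fall inside long tracklog gaps.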

Here are a few resources, papers, projects, guidelines and other geo-reference issues.

Papers

  • GPS plotting in Flash

GPS track and waypoint extraction

Transferring data from GPS devices can be problematic. If this is going to work in a wider, collaborative context there is a need to make guidelines for this process. It is also really important to make sure units and timezones are correctly set up on all software, so that no translation happens as the data is converted. Exported data also tends to be messy, with mixed tracklogs and waypoints, which for us meant a lot of hand-tweaking.
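One way to pull mixed tracklogs and waypoints apart is to read the GPX elements directly. This sketch assumes the GPX 1.1 namespace and keeps only coordinates:

```python
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def split_gpx(gpx_text):
    """Separate trackpoints (<trkpt>) from waypoints (<wpt>) in a GPX file,
    returning two lists of (lat, lon) pairs."""
    root = ET.fromstring(gpx_text)
    trkpts = [(float(p.get("lat")), float(p.get("lon")))
              for p in root.iter(GPX_NS + "trkpt")]
    wpts = [(float(p.get("lat")), float(p.get("lon")))
            for p in root.iter(GPX_NS + "wpt")]
    return trkpts, wpts
```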

  • Garmin Mapsource
  • MacGPS Pro
  • GPS Babel
  • GPSylon tool for downloading/viewing GPS data
  • GPS to GEO-RDF

Public markup

I have made a selection of research images over at Flickr, and more of the text and research will be online soon.
