The Immaterials project

Posted on Sep 4, 2013 in Interaction design, Research


The Immaterials project is concerned with the increasing invisibility of interfaces and infrastructures. The systems we interact with every day, such as WiFi and 3G networks, have a profound impact on how we experience the world. As Adam Greenfield says:

the complex technologies the networked city relies upon to produce its effects remain distressingly opaque, even to those exposed to them on a daily basis. [...] it’s hard to be appropriately critical and to make sound choices in a world where we don’t understand the objects around us.

And as James Bridle has eloquently and disturbingly observed:

Those who cannot perceive the network cannot act effectively within it, and are powerless.

The project set out to expose some of the phenomena and mechanisms of technological infrastructures through visual, photographic, narrative, animated and cinematic techniques. Over the last five years I have worked with Einar Sneve Martinussen, Jørn Knutsen, Jack Schulze and Matt Jones towards a body of work that is now brought together in an exhibition for the first time.

From 2004 to 2008 I speculated about the ways in which wireless interactions inhabited physical space, through my work on a Graphic language for touch, and also through films such as Wireless in the world. Some of my students made beautiful but fictional speculations about the physical qualities of different kinds of radio.

Jack Schulze and I also made a short, playful film called Nearness about action at a distance. In the film, a series of simple reactions are set off by immaterial phenomena, such as radio waves, mobile networks, light, magnetism and wind.

In 2006 we ran a Touch workshop with BERG where we became concerned about the invisibility of RFID technology, and the effect that had on our ability to design with it. We found it extraordinary that a technology defined as a proximity or ‘touch’-based interface was so opaque in terms of its physical, spatial and gestural materiality. How do we as designers make these materials visible, so that we can have reflective conversations with them?

We developed Experiments in Field Drawing as a method of revealing, literally drawing, the physical presence of RFID interactions. We revealed these fields in a much richer, multi-dimensional way using photography, animation and light painting in the film Immaterials: Ghost in the Field.

Matt Jones coined the term immaterials to describe the project and gave a great talk about some ways of understanding the immaterials of interaction design. Matt and I also looked at machine vision, another phenomenon that is increasingly becoming a material for design, in Robot Readable World.

In 2011 at AHO, as part of a research project called Yourban, we extended the investigations to WiFi. Using similar light painting techniques, we revealed the enormous scale and pervasiveness of ad-hoc WiFi networks in urban spaces in Immaterials: Light Painting WiFi.

Finally, over the last two years we’ve become increasingly interested in the Global Positioning System (GPS), which has become a central part of both the vision and the implementation of contemporary interfaces. We have built a series of Satellite Lamps that sense the presence of the 24 GPS satellites in orbit. The lamps change brightness according to the strength of the GPS signals they receive, showing how the technology itself is messy and unpredictable, and revealing how GPS is a negotiation between radio waves, earth-orbit geometry and the urban environment.
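The lamps’ actual implementation isn’t described here, but the core mapping is simple to sketch: GPS receivers report a per-satellite signal-to-noise value in NMEA $GPGSV sentences, and an average of those values can drive a brightness. A minimal Python sketch, assuming a GPS module on a serial port (the port name, baud rate and 0–50 dB SNR scale are all assumptions):

```python
import serial  # pyserial, assumed available

def snr_values(gsv_sentence):
    """Extract per-satellite SNR values (dB) from a $GPGSV sentence."""
    fields = gsv_sentence.split('*')[0].split(',')
    # After three header fields, satellites come in groups of four:
    # PRN, elevation, azimuth, SNR (SNR is empty when not tracked).
    return [int(s) for s in fields[7::4] if s.isdigit()]

def brightness(snrs, max_snr=50.0):
    """Map mean SNR onto an 8-bit PWM duty cycle for the lamp."""
    if not snrs:
        return 0
    return min(255, int(255 * (sum(snrs) / len(snrs)) / max_snr))

# '/dev/ttyUSB0' is a placeholder for whatever port the module appears on.
with serial.Serial('/dev/ttyUSB0', 9600, timeout=1) as port:
    while True:
        line = port.readline().decode('ascii', errors='ignore').strip()
        if line.startswith('$GPGSV'):
            print(brightness(snr_values(line)))  # drive a PWM pin here
```

Watching that number flicker as buildings and weather get in the way is exactly the messiness the lamps make visible.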

Satellite Lamps is on display at Lighthouse, and a film will be available online later this year.


The visual languages that we’ve developed have ended up in advertising, on the BBC and Discovery Channel, and the techniques have been extended in research at MIT and CIID, and by many designers, enthusiasts and hackers. It’s exciting that both the subject and the methods are being taken up and used broadly by other people, and we’re looking forward to seeing more.

the truly pressing need is for translators: people capable of opening these occult systems up, demystifying them, explaining their implications to the people whose neighborhoods and choices and very lives are increasingly conditioned by them. — Adam Greenfield (2009)

The Immaterials project emerged from the humble preoccupations of a few designers dealing with some of the invisible, immaterial, intangible stuff we had in front of us. These small experiments led to larger and more visually and narratively communicative work. In the end what I think we’ve developed is an approach to technology that revolves around material exploration, explanation and communication. Because images and language, as well as materials, form our understandings of technology, Immaterials has shown how we can use ‘design and playful explorations to shape or stir the popular imagination’.

The exhibitions

All the Immaterials projects are on display at Lighthouse in Brighton from 5 September until 13 October 2013.

Satellite Lamps and Robot Readable World are on display at Dread in Amsterdam from 7 September until 24 November 2013.

Robot readable world

Posted on Feb 7, 2012 in Research

This is a short film, an experiment in machine-vision footage. It uses found footage from computer vision research to explore how machines are making sense of the world.

As robots begin to inhabit the world alongside us, how do they see and gather meaning from our streets, cities, media and from us? Machines have a tiny, fractional view of our environment that sometimes echoes our own human vision, and often doesn’t.

Read more about the film and have a look at Matt Jones’ talk of the same title.

The future is Movie OS

Posted on Apr 18, 2010 in Research

Still from the film xXx from Mark Coleran’s portfolio.

The idea that Apple is grasping at real-life objects because they support effective visual storytelling is very interesting:

In Movie OS, visual storytelling is used to make the system’s important, critical reaction to a user’s action abundantly clear. In Movie OS, you know if you’re logging into Facebook.

I’d argue that visual storytelling doesn’t exist – if it does, it hardly exists at all – in computer or consumer electronics user interfaces. The entire palette of visual storytelling in terms of interface is, through accident of history, purely engineering- and control-led.

This is where, I’d say, Apple is grasping when it says that interfaces should sometimes look toward real-life objects. Real-life physical objects have affordances that are used in effective visual storytelling – and animation – that can be used well to make clear the consequences of actions. It’s more complicated than that, though, and it can go horribly wrong as well as right.

From Dan Hon at Extenuating Circumstances – The future is Movie OS.

The Films of Charles & Ray Eames

Posted on Mar 1, 2010 in Research

“While Charles & Ray were frequently contracted by corporations like Polaroid, Westinghouse, and IBM, they never made films on demand. Nearly all their films represent a symbiotic relationship between the artist and the client, and they only made films when there was genuine interest. Witness Westinghouse ABC (1965), which is essentially a montage of the Westinghouse product line (note that the Westinghouse logo was designed by Paul Rand). Even here there is a spirited interest in the subject. In the film, Charles & Ray focus on the technology and typography at a break-neck tempo and transform what would otherwise be an incredibly dry subject into something rich and lively. Also, in SX-70 (1972), intended as a promotional film for the newly released Polaroid SX-70 camera, the Eameses take advantage of the opportunity to discuss optics and transistors, and to display their own Polaroid photographs.”

A good overview via The Films of Charles & Ray Eames.

Things

Posted on Feb 2, 2010 in Research

Things I’ve noticed today:

Lovely new exploratory homepage at Thinglink.

There is clearly a very well-curated user-base at SVPPLY, creating a continuous navigation of want.

Related: Social networks for things, Thingd, Allconsuming.

“VOLUME 5a May 2009 Part one of Volume 5 explores the connections between the moving framed image and…”

Posted on Sep 22, 2009 in geography, mediation, Research, video

VOLUME 5a

May 2009

Part one of Volume 5 explores the connections between the moving framed image and geography, offering author-created videos and movie clips to supplement textual materials.

VOLUME 5b

May 2009

Part two of Volume 5 engages a range of media from televisual and cinematic spaces to altporn’s Suicide Girls to the use of place in transnational news.

Aether: The Journal of Media Geography Upcoming Issues

Touch

Posted on Sep 4, 2006 in Mobility, Project, Research, Technology, Ubicomp

Augmented reality experiments

I’m really not a fan of the goggle/glasses/helmet variety of AR, where the user wears something in front of their eyes that superimposes 3D objects into the physical world. In my experience this has been slow, inaccurate, cumbersome and headache-inducing: the worst of VR plus a lot more problems. But AR is really interesting when it’s just a screen and a video feed; it becomes somehow magical to see the same space represented twice: once in front of you, and once on screen with magical objects. I can imagine this working really well on mobile phones: the phone screen as magic lens to secret things.
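We were working with ARtoolkit at the time; today the same screen-and-video-feed ‘magic lens’ can be sketched in a few lines using OpenCV’s ArUco markers as a stand-in (this assumes opencv-contrib-python and its pre-4.7 aruco API, not anything we actually used):

```python
import cv2  # assumes opencv-contrib-python with the aruco module (pre-4.7 API)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)  # the video feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Find markers in the frame and draw an overlay onto the second,
    # on-screen representation of the space.
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow('magic lens', frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```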

Hand drawing markers

On that afternoon we didn’t have a printer handy for making the AR marks, so we took to drafting them by hand, stencilling them off the screen with a pencil and inking them in. This hand-crafted process led to all sorts of interesting connections between the possibilities of craft and digital information.

AR nail decorations

We had lots of ideas about printing the markers on clothes, painting them on nails, glazing them into ceramics, etc. We confused ARtoolkit by drawing markers in perspective, and tried to get recursive objects by using screen-based markers and video feedback.

Confusing ARtoolkit

Now, as it turns out, there is an entire research programme dedicated to looking at just this topic. Variable Environment is a research programme involving partners like ECAL and EPFL. The great thing is that they are blogging the entire exploratory (they call it ‘sketch’) phase and curating the results online. The work is multi-disciplinary and involves architects, visual designers, computer scientists, interaction designers, etc. Check out the simple AR-ready products, sample applications and mixed reality tests with various patterns.

This seems to be part of a shift in the research community, to publishing ongoing and exploratory work online (championed by the likes of Nicolas Nova and Anne Galloway). Very inspirational.

The address book desk

Posted on Dec 16, 2005 in Interaction design, Project, Research, Ubicomp

Underneath the desk I have stuck a grid of RFID tags, and on the top surface, the same grid of post-it notes. With the standard Nokia Service Discovery application it is possible to call people, send pre-defined SMSes or load URLs by touching the phone to each post-it on the desk. On the post-its themselves I have hand-written the function, message and the recipient. This is somewhat like a cross between a phone-book, to-do list and temporary diary, with notes, scribbles and tea stains alongside names.
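The logic of the desk is just a lookup from tag to action; the phone’s Service Discovery application does the real work. A sketch of that mapping (tag UIDs, numbers and URLs here are all made up):

```python
# Hypothetical tag UIDs mapped to the three available actions.
DESK_GRID = {
    '04:A1': ('call', '+47 000 00 001'),              # call a close friend
    '04:A2': ('sms',  '+47 000 00 001', "I'm here"),  # pre-defined message
    '04:A3': ('url',  'http://elasticspace.com'),     # load a URL
}

def on_touch(tag_uid):
    """What conceptually happens when the phone rests on a post-it."""
    entry = DESK_GRID.get(tag_uid)
    if entry is None:
        return  # blank post-it: no function assigned yet
    kind, *args = entry
    print(f'{kind}: {args}')

on_touch('04:A2')  # rest the phone on the "I'm here" post-it
```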

Initial ideas were to spraypaint or silkscreen some of the touch icons to the desk surface, and I may well do that at some point. But for quick prototyping it made sense to use address labels or post-it notes that can be stuck, re-positioned and layered with hand-written notes.

This is an initial step in thinking about the use of RFID and mobile phones, a way of thinking through making. In many ways it is proving to be more inconvenient than the small screen (particularly with the occasionally unreliable firmware on this particular cover; I can’t speak for the production version). But it has highlighted some really interesting issues.

First of all it has brought to the forefront the importance of implicit habits. Initially, it took a real effort to think about the action of using the table as an interface: I would reach for the phone and press names to make a call, instead of placing it on the desk. But for some functions, such as sending an SMS, it has become more habitual.

SMSes have become more like ‘pings’ when very little effort is made to send them. At the same time they are more physically tangible: I rest the phone in a certain position on the desk and wait for it to complete an action. The most useful functions have been “I’m here” or “I’m leaving” messages to close friends.

I have had to consider the ‘negative space’ where the mobile must rest without any action. This space has the potential to be used for context information; a corner of the table could make my phone silent, another corner could change my presence online. Here it would be interesting to refer to Jan Chipchase’s ideas around centres of gravity and points of reflection; it’s these points that could be most directly mapped to behaviour. I’m thinking about other objects and spaces that might be appropriate for this, and perhaps around the idea of thoughtless acts.

If this were a space without wireless internet, I could also imagine this working very well for URLs: quick access to Google searches, local services or number lookups, which are usually very tricky on a small screen. Here it would be interesting to think about how the mobile is used in non-connected places, such as the traditional Norwegian Hytte [pdf].

This process also raised a larger issue around the move towards tangible or physical information, which also implies a move towards the social. As I was making the layout of my address book and associated functions, I realised that maybe these things shouldn’t be explicit, visible, social objects. The arrangement of people within the grid makes personal sense; the placement is a personal preference and maps in certain ways to frequency and type of contact. But I wonder how it appears to other people when this pattern is exposed. Will people be offended by my layout? What if I don’t include a rarely called contact? Are there numbers I want to keep secret, hidden behind acronyms in the ‘Names’ menu?

It will be interesting to see how this plays out and changes over time, particularly in the reaction of others. I’ll post more about the use of NFC in other personal contexts in the near future.

The making of…

The desk is made from 20 mm birch ply, surfaced in linoleum. I stuck a single RFID tag to the underside, in the place that felt most natural. A 10 cm grid was worked out from that point, the remaining tags were stuck in that grid, and the same grid was marked out on top. If I were to re-build the desk with this project in mind, the tags should probably be layered closer to the surface, between the ply and the linoleum. This would make them slightly more responsive to touch by giving them a larger read/write distance.

Rewritable 512-bit Philips MiFare UL stickers.

10 cm grid of tags on the underside of the desk.

Blank post-it notes on the surface, with the same grid.

More photos at Flickr.

Nokia 3220 with NFC

Posted on Dec 6, 2005 in Interaction design, Mobility, Research, Ubicomp, Usability

First impressions

Overall the interaction between phone and RFID tags has been good. The reader/writer is at the base of the phone, at the bottom. This seems a little awkward to use at first, but slowly becomes natural. When I have given it to others, their immediate reaction is to point the top of the phone at the tag, and nothing happens. There follow a few moments of explaining the intricacies of RFID and looking at the phone, with its Nokia ‘fingerprint’ icon. As phones increasingly become replacements for ‘contactless cards’, it seems likely that this interaction will become more habitual and natural.

Once the ‘service discovery’ application is running, the read time from tags is really quick. The sharp vibrations and flashing lights add to a solid feeling of interacting with something, both in the haptic and visual senses. This should turn out to be a great platform for embodied interaction with information and function.

The ability to read and write tags makes it potentially adaptable as a platform much wider than just advertising or ticketing. As an interaction designer I feel quite enabled by this technology: the three basic functions (making phonecalls, going to URLs, or sending SMSes) are enough to start thinking about tangible interactions without having to go and program any Java midlets or server-side applications.

I’m really happy that Nokia is putting this technology into a ‘low-end’ phone rather than pushing it out in a ‘smartphone’ range. This is where there is potential for wider usage and mass-market applications, especially around gaming and content discovery.

Improvements

I had some problems launching the ‘service discovery’ application. Sometimes it works, sometimes it doesn’t, and it’s difficult to tell why. It would be great to be able to place the phone on the table, knowing that it will respond to a tag, but it was just a little too unreliable to do that without checking to see that it had responded. The version I have still says it’s a prototype, so this may well be sorted out in the released version.

Overall there is a lack of integration between the service discovery application and the rest of the system: Contacts, SMS archive, service bookmarks etc. At the moment we need to enter the application to write and manage tags, or to give a ‘shortcut’ to another phone, but it seems that, as with Bluetooth and IR, this should be part of the contextual menus that appear under ‘Options’ within each area of the phone. There are also some infuriating prompts that appear when interacting with URLs; more details below.

Details

The phone opens the ‘service discovery’ application whenever it detects a compatible RFID tag near the base of the phone (when the keypad lock is off). This part is a bit obscure: sometimes it doesn’t ‘wake up’ for a tag, and the application needs to be loaded before it will read properly. Once the application is open (about 2-3 seconds) the read time of the tags seems instantaneous.

The shortcuts menu gives access to stored shortcuts. Confusingly, these are different from ‘bookmarks’ and the ‘names’ list on the phone, although names can be searched from within the application. I think tighter integration with the OS is called for.

Shortcuts can be added, edited, deleted, etc. in the same way as contacts. They can be ‘Given’ to another phone or ‘Written’ to a tag.

There are three kinds of shortcuts: Call, URL or SMS. ‘Call’ will create a call to a pre-defined number, ‘URL’ will load a pre-defined URL, and ‘SMS’ will send a pre-defined SMS to a particular number. This part of the application has the most room for innovative extensions: we should be able to set the state of the phone, change profiles, change themes, download graphics, etc. This can be achieved by loading URLs, but URLs and mobiles don’t mix, so why should we be presented with them when there could be a more usable layer in between? There could also be preferences for prompts: at the moment each action has to be confirmed with a yes or a no, but in some secure environments it would be nice to have a function launched without the extra button push.
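As a sketch of what that more usable layer might look like, here are the three shortcut types as simple records, with confirmation turned into a preference rather than a hard-wired prompt. Names and structure are illustrative, not the phone’s actual firmware:

```python
from dataclasses import dataclass

@dataclass
class Shortcut:
    kind: str        # 'call', 'url' or 'sms'
    target: str      # phone number or URL
    body: str = ''   # message text, used by 'sms' shortcuts

def launch(shortcut, confirm=None):
    """Run a shortcut; pass confirm=None in trusted environments to skip
    the extra button press the stock application always demands."""
    if confirm is not None and not confirm(shortcut):
        return
    print(f'{shortcut.kind} -> {shortcut.target} {shortcut.body}'.strip())

launch(Shortcut('sms', '+4700000000', "I'm leaving"))  # number is made up
```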

If a tag contains no data, then we are notified and placed back on the main screen (as happened when I tried to write to my Oyster card).

If the tag is writeable we are asked which shortcut to write to the tag.

When we touch a tag with a shortcut on it, a prompt appears asking for confirmation. This is a level of UI to prevent mistakes, and a certain level of security, but it also reduces the overall usability of the system. With URL launching, there are two stages of confirmation, which is infuriating. There needs to be some other mode of confirmation, and the ‘service discovery’ app needs to somehow be deeper in the system to avoid these double button presses.

Lastly, there is a log of actions. Useful to see if the application has been reading something in your bag or wallet, without you knowing…

Embodied interaction in music

Posted on Apr 28, 2005 in Interaction design, Media, Mobility, Research, Sound, Usability

I too have ditched my large iPod for the iPod Shuffle, finding that I love the white-knuckle ride of random listening. But that doesn’t exclude the need for a better small-screen-based music experience.

The pseudo-analogue interface of the iPod clickwheel doesn’t cut it. It can be difficult to control when accessing huge alphabetically ordered lists, and the acceleration or inertia of the view can be really frustrating. The combinations of interactions (clicking into deeper lists, scrolling, clicking deeper) turn into long and tortuous experiences if you are engaged in any simultaneous activity. Plus it’s difficult to use through clothing, or with gloves.

Music and language

My first thought was something Jack and I discussed a long time ago: using a phone keypad to type the first few letters of an artist, album or genre and seeing the results in real-time, much like iTunes does on a desktop. I find myself using this a lot in iTunes rather than browsing lists.

Predictive text input would be very effective here, when limited to the dictionary of your own music library. (I wonder if QIX search would do this for a music library on a mobile?)
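A sketch of how that keypad search could work when limited to one’s own library (I don’t know QIX’s internals; this is just the obvious prefix-matching approach):

```python
# Standard phone keypad letter groups.
KEYS = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
        '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
LETTER_TO_DIGIT = {l: d for d, letters in KEYS.items() for l in letters}

def to_digits(name):
    """Encode a name as the digit sequence that would type it."""
    return ''.join(LETTER_TO_DIGIT.get(c, '') for c in name.lower())

def search(digits, library):
    """Return entries whose keypad encoding starts with the typed digits."""
    return [a for a in library if to_digits(a).startswith(digits)]

library = ['Aphex Twin', 'Autechre', 'Boards of Canada']
print(search('28', library))  # keys 2,8 -> ['Autechre']
```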

Maybe now is the time to look at this, as mobile phones and music players converge.

Navigating through movement

Since scrolling is inevitable to some degree, even within fine search results, what about using simple movement or tilt to control the search results? One of the problems with using movement for input is context: when is movement intended? And when is movement the result of walking or a bump in the road?

One solution could be a “squeeze and shake” quasi-mode: squeezing the device puts it into a receptive state.

Another could be more reliance on the 3 axes of tilt, which are less sensitive to larger movements of walking or transport.
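A sketch of how these two ideas might combine: tilt only counts as scrolling while the device is squeezed, and small tilts inside a dead zone are ignored as incidental movement (thresholds are guesses, and the sensor reads are stand-ins):

```python
def scroll_step(tilt_y, dead_zone=0.15):
    """Map tilt (in g) to a scroll step, ignoring small incidental motion."""
    if abs(tilt_y) < dead_zone:
        return 0
    return 1 if tilt_y > 0 else -1

def update(squeezed, tilt_y, position, n_items):
    """Advance the list position only while in the squeeze quasi-mode."""
    if not squeezed:   # not squeezed: bumps in the road do nothing
        return position
    position += scroll_step(tilt_y)
    return max(0, min(n_items - 1, position))
```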

Gestures

I’m not sure about gestural interfaces: most of the prototypes I have seen are difficult to learn, and require a certain level of performativity that I’m not sure everyone wants to be doing in public space. But having accelerometers inside these devices should, and would, allow for hacking together other personal, adaptive gestural interfaces that would perhaps access higher-level functions of the device.

One gesture I think could be simple and effective would be covering the ear to switch tracks. To try this out we could add a light or capacitive touch sensor to each earbud.

With this I think we would have trouble with interference from other objects, like resting the head against a wall. But there’s something nicely personal and intimate about putting the hand next to the ear, as if to listen more intently.
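In software, one way to handle that interference: only treat a cover of plausible length as a gesture, so both momentary flickers and long sustained contact (a head resting against a wall) are ignored. A sketch, assuming a normalised 0–1 sensor value:

```python
COVERED = 0.8  # sensor reading above this counts as 'covered' (assumed scale)

def detect_tap_cover(samples, min_len=3, max_len=15):
    """Fire on a cover-then-release of plausible length."""
    run = 0
    for s in samples:
        if s > COVERED:
            run += 1
        else:
            if min_len <= run <= max_len:
                return True   # deliberate cover-and-release: switch track
            run = 0           # too short (flicker) or too long (wall): ignore
    return False
```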

More knobs

Things that are truly analogue, like volume and time, should be mapped to analogue controls. I think one of the greatest unexplored areas in digital music is real-time audio-scrubbing, currently not well supported on any device, probably because of technical constraints. But scrubbing through an entire album, with a directly mapped input, would be a great way of finding the track you wanted.
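The directly mapped input is simple to describe: the control’s position selects a point in the album’s total running time. A sketch:

```python
def scrub(position, track_lengths):
    """Map a 0-1 knob position to (track index, seconds into that track)."""
    total = sum(track_lengths)
    t = position * total
    for i, length in enumerate(track_lengths):
        if t < length:
            return i, t
        t -= length
    return len(track_lengths) - 1, track_lengths[-1]

print(scrub(0.5, [240, 180, 300]))  # halfway through a 12-minute album
```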

Research projects like the DJammer are starting to look at this, specifically for DJs. But since music is inherently time-based there is more work to be done here for everyday players and devices. Let’s skip the interaction design habits we’ve learnt from the CD era and go back to vinyl :)

Evolution of the display

Where displays are required, I hope we can be free of small, fuzzy, low-contrast LCDs. With new displays being printable on paper, textiles and other surfaces there’s the possibility of improving the usability, readability and “glanceability” of the display.

We are beginning to see signs of this with the OLED display on the Sony Network Walkman, where the display is under the surface of the product material, without a separate “glass” area.

For the white surface of an iPod, the high-contrast, paper-like surfaces of technologies like e-ink would make great, highly readable displays.

Prototyping

So I really need to get prototyping with accelerometers and display technologies, to understand simple movement and gesture in navigating music libraries. There are other questions to answer: I’m wondering if using movement to scroll through search results would create the appearance of a large screen space, through the lens of a small screen. As with bumptunes, I think many more opportunities will emerge as we make these things.

More reading

Designing for Shuffling
Thoughts on the iPod Shuffle
Bumptunes
Audioclouds/gestural interaction
Sound objects
DJammer
On the body
Runster

Tangible and social interaction

Posted on Mar 18, 2005 in Interaction design, Research, Social, Technology, Ubicomp

Brief history of interaction

(Based on Dourish, see reading recommendations, below)

Each successive development in computer history has made greater use of human skills:

  • electrical: required a thorough understanding of electrical design
  • symbolic: required a thorough understanding of the manipulation of abstract languages
  • textual: text dialogue with the computer: set the standards of interaction we still live with today
  • graphic: graphical dialogue with the computer, using our spatial skills, pattern recognition, and motion memory with a mouse and keyboard

We have become stuck in this last model.

Interaction with computers has remained largely the same: desk, screen, input devices, etc. Even entirely new fields like mobile and iTV have followed these interaction patterns.

Definitions:

  • Tangible: physical: having substance or material existence; perceptible to the senses
  • Social: human and collaborative abilities, or ‘software that’s better because there’s people there’ (Definition from Matt Jones and Matt Webb)

Examples

Dourish notes in the first few chapters of his book that as interaction with computers moves out into the world, it becomes part of our social world too. The social and the tangible are intricately linked as part of “being in the world”.

What follows are examples of products or services we can use or buy right now. I’m specifically interested in the ways that these theories of ubiquitous computing and tangible interaction are moving out into the world, and the way that we can see the trends in currently available products.

I’m aware that there are also terrifically interesting things happening in research (eg the Tangible Media Group), but right now I’m interested in the emergent effects of millions of people using things (like Flickr, weblogs, Nintendo DS, and mobile social software).

Social trends on the web

On the web the current trend is building simple platforms that support complex social/human behaviour:

  • Weblogs, newsreaders and RSS: a simple platform that has changed the way the web works, and supported simple social interaction (the basic building blocks of dialogue, or conversation)
  • Flickr: a simple platform for media/photo sharing that has turned into a thriving community: works well with the web by allowing syndicated photos, and bases the social network on top of a defined function
  • Others include del.icio.us, World of Warcraft, etc.

Social mobile computing

On mobile platforms most of the exciting stuff is happening around presence, context and location:

  • Familiar strangers: stores a list of all the phones that you have been near in places that you inhabit, and then visualises the space around you according to who you have met before. More mobile social software
  • Mogi: a location-based game that, most interestingly, supports different contexts of use: both at home in front of a big screen, and out on a small mobile screen.

Social games

Interesting that games are moving away from pure immersive 3D worlds, and starting to devote equal attention to their situated, social context:

  • Nintendo DS: PictoChat, local wireless networks that can be adapted for gameplay or communication (picture chatting included as standard)
  • Sissyfight: very simple social game structure, encourages human behaviour, insults
  • Habbohotel: simple interaction structures, (and fantastic attention to detail in iconic representations) support human desires. Now a very large company, in over 12 countries, based on the sales of virtual furniture
  • Singstar: entirely social game, about breaking social barriers and mutual humiliation: realtime analysis/visualisation of your voice actually makes you sing worse!

Tangible games

  • Eyetoy: Brings the viewer into the screen, creates a performative and social space, and allows communication via PS2
  • Dance Dance Revolution: taking the television into physical space
  • Nokia wave-messaging: puts information back into space, and creates social and performative opportunities (Photo thanks to Matt Webb)

Sound objects

Posted on Feb 25, 2005 in Interaction design, Research, Sound, Technology, Ubicomp

These are some of my notes from Mikael Fernström’s lecture at AHO.

The aim of the Soundobject research is to liberate interaction design from visual dominance, to free up our eyes, and to do what small displays don’t do well.

Reasons for focusing on sound:

  • Sound is currently under-utilised in interaction design
  • Vision is overloaded and our auditory senses are seldom engaged
  • In the world we are used to hearing a lot
  • Adding sound to existing, optimised visual interfaces does not add much to usability

Sound is very good at attracting our attention, so we have alarms and notification systems that successfully use sound in communication and interaction. We talked about using ‘caller groups’ on mobile phones, where people in an address book can be assigned different ringtones, and how effective it was in changing our relationship with our phones. In fact it’s possible to sleep through unimportant calls: our brains are processing and evaluating sound while we sleep.

One fascinating thing that I hadn’t considered is that sound is our fastest sense: it has an extremely high temporal resolution (ten times faster than vision), so for instance our ears can hear pulses at a much higher rate than our eyes can watch a flashing light.

Disadvantages of sound objects

Sound is not good for continuous representation because we cannot shut out sound in the way we can divert our visual attention. It’s also not good for absolute display: pitch, loudness and timbre are relative for most people, and even people who have absolute pitch can be affected by contextual sounds. And context is a big issue: loud or quiet environments (libraries and airplanes, for example) affect the way that sound must be used in interfaces.

There are also big problems with spatial representation in sound: techniques that mimic the position of sound based on binaural differences are inaccessible to about a fifth of the population. This perception of space in sound is also intricately linked with the position and movement of the head. Some Google searches on spatial representation of sound. See also Psychophysical Scaling of Sonification Mappings [pdf].

Cartoonification

‘Filling a bottle with water’ is a sound that could work as part of an interface, representing actions such as downloading or uploading, or replacing progress bars. The sound can be abstracted into a ‘cartoonification’ that works more effectively: the abstraction separates simulated sounds from everyday sounds.

Mikael cites inspiration from foley artists working in film sound design, who are experienced in emphasising and simplifying sound actions, and in creating dynamic sound environments, especially in animation.

A side effect of this ‘cartoonification’ is that sounds can be generated in simpler ways, reducing processing and memory overhead in mobile devices. In fact all of the soundobject experiments rely on parametric sound synthesis using PureData: sounds are generated on the fly rather than played back from sampled sound files, resulting in small, fast, adaptive interface environments (sound files and the PD files used to generate the sounds can be found at the Soundobject site).

One exciting and pragmatic idea that Mikael mentioned was simulating ‘peas in a tin’ to hear how much battery is left in a mobile device. Something that seems quite possible, reduced to mere software, with the accelerometer in the Nokia 3220. Imagine one ‘pea’ rattling about, instead of one ‘bar’ on a visual display…
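As a rough illustration of that idea (the real work used parametric synthesis in PureData; this Python stub only generates impact times): battery level sets the number of peas, shaking sets the rattle rate, and an emptier battery gives a sparser, lonelier rattle.

```python
import random

def pea_impacts(battery_fraction, shake_rate, max_peas=5, duration=1.0):
    """Return impact times in seconds. shake_rate is impacts/sec per pea."""
    peas = max(1, round(battery_fraction * max_peas))
    impacts = []
    for _ in range(peas):
        t = random.expovariate(shake_rate)   # Poisson-ish impact process
        while t < duration:
            impacts.append(t)
            t += random.expovariate(shake_rate)
    return sorted(impacts)

print(pea_impacts(0.2, 8.0))  # nearly flat battery: one pea rattling about
```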

Research conclusions

The most advanced prototype of a working sound interface was a box that responded to touch, and had invisible soft-buttons on its surface that could only be perceived through sound. The synthesised sounds responded to the movement of the fingertips across a large touchpad-like device (I think it was a Tactex device). These soft-buttons used a simplified sound model that synthesised impact, friction and deformation. See Human-Computer Interaction Design based on Interactive Sonification [pdf].

The testing involved asking users to feel and hear their way around a number of different patterns of soft-buttons, and to draw the objects they found. See these slides for some of the results.

The conclusions were that users were almost as good at using sound interfaces as normal soft-button interfaces, and that auditory displays are certainly a viable option for ubiquitous, especially wearable, computing.

More reading

Soundobject
Gesture Controlled Audio Systems
ICAD

Spatial memory at Design Engaged 2004

Notes on two related projects:

1. Time that land forgot

  • A project in collaboration with Even Westvang
  • Made in 10 days at the Icelandic locative media workshop, summer 2004
  • Had the intention of making photo archives and GPS trails more useful/expressive
  • Looked at patterns in my photography: 5 months, 8000 photos, visualised by date and time of day. A fantastic resource for me: late night parties, early morning flights, holidays and the effect of the midnight sun are all visible.
  • time visualisation

2. Marking in urban public space

I’ve also been mapping stickering, stencilling and flyposting: walking around with the camera+GPS and photographing examples of marking (not painted graffiti).

This research looks at the marking of public space by investigating the physical annotation of the city: stickering, stencilling, tagging and flyposting. It attempts to find patterns in this marking practice, looking at visibility, techniques, process, location, content and audience. It proposes ways in which this marking could be a layer between the physical city and digital spatial annotation.

Some attributes of sticker design

  • Visibility: contrast, monochromatic, patterns, bold shapes, repetition
  • Patina: history, time, decay, degradation, relevance, filtering, social effects
  • Physicality: residue of physical objects: interesting because these could easily contain digital info
  • Adaptation and layout: layout is usually respectful, innovative use of dtp and photocopiers, adaptive use of sticker patina to make new messages on top of old

Layers of information build on top of each other: as with graffiti, stickers show their age through fading and patina, and flyposters become unstuck, torn and covered in fresh material. Viewed from a distance the patina is evident, new work tends to respect old, and even commercial flyposting respects existing graffiti work.

Techniques vary from strapping zip-ties through cardboard and around lampposts for large posters, to simple hand-written notes stapled to trees, and short-run printed stickers. One of the most fascinating and interactive techniques is the poster offering strips of tear-off information. These are widely used, even in remote areas.

Initial findings show that stickers don’t relate to local space: they are less about specific locations than about finding popular locations, “cool neighbourhoods” or just ensuring repeat exposure. This is the opposite of my expectations, and perhaps sheds some light on the current success/failure of spatial annotation projects.

I am particularly interested in the urban environment as an interface to information and an interaction layer for functionality, using our spatial and navigational senses to access local and situated information.

There is a concern that in a dense, spatially annotated city we might have an overload of information: what about filtering and fore-grounding of relevant, important information? Given that current technologies have very short ranges (10–30 mm), we might be able to use our existing spatial skills to navigate overlapping information. We could shift some of the burden of information retrieval from information architecture to physical space.

I finished by showing this animation by Kriss Salmanis, a young Latvian artist. An amazing re-mediation of urban space through stencilling, animation and photography. (“Un ar reizi naks tas bridis” roughly translates as “And in time the moment will come”.)

Footnotes/references

Graffiti Archaeology, Cassidy Curtis
otherthings.com/grafarc

Street Memes, collaborative project
streetmemes.com

Spatial annotation projects list
elasticspace.com/2004/06/spatial-annotation

Nokia RFID kit for 5140
nokia.com/nokia/0,,55739,00.html

Spotcodes, High Energy Magic
highenergymagic.com/spotcode

‘Mystery Meat navigation’, Vincent Flanders
fixingyourwebsite.com/mysterymeat.html

RDF as barcodes, Chris Heathcote
undergroundlondon.com/antimega/archives/2004_02.html

Implementation: spatial literature
nickm.com/implementation

Yellow Arrow
yellowarrow.org
