Satellite Lamps is a project that reveals one of the most significant contemporary technology infrastructures, the Global Positioning System (GPS). Made with Einar Sneve Martinussen and Jørn Knutsen as part of the Yourban research project at AHO, it continues our project of revealing the materials of technologies that started in 2009 with RFID and WiFi.
GPS is widely used yet it’s invisible and few of us really have any idea of how it works or how it inhabits our everyday environments. We wanted to explore the cultural and material realities of GPS technology, and to develop new understandings about it through design.
“Satellite Lamps shows that GPS is not a seamless blanket of efficient positioning technology; it is a negotiation between radio waves, earth-orbit geometry and the urban environment. GPS is a truly impressive technology, but it also has inherent seams and edges.”
We created a series of lamps that change brightness according to the accuracy of received GPS signals, and when we photograph them as timelapse films, we start to build a picture of how these signals behave in actual urban spaces.
We’ve made a film that you can watch here, and published an extensive article that details thoroughly how and why it was made. You can read more on how we explored GPS technology, how the visualisations were made, and about the popular cultural history of GPS.
My PhD thesis called ‘Making Visible’ was submitted in December 2013 and successfully defended on 12 June 2014. The thesis reflects upon the material exploration research from the Touch and Yourban projects. It uses these explorations to situate design research with technology as a cultural, material and mediational practice:
In Making Visible I outline how interaction design may engage in the material and mediation of new interface technologies. Drawing upon a design project called Touch, which investigated an emerging interface technology called Radio Frequency Identification (RFID), I show how interaction design research can explore technology through material and mediational approaches. I demonstrate and analyse how this research addresses the inter-related issues of invisibility, seamlessness and materiality that have become central issues in the design of contemporary interfaces. These issues are analysed and developed through three intertwined approaches of research by design: 1. a socio- and techno-cultural approach to understanding emerging technologies, 2. material exploration, and 3. communication and mediation. When taken together these approaches form a communicative mode of interaction design research that engages directly with the exploration, understanding and discussion of emerging interface technologies.
The thesis is made up of four peer-reviewed journal articles accompanied by a ‘meta-reflection’ document that reflects upon and situates these publications through theory, concepts and models.
This document develops the concept of mediational material that focuses attention on the material and communicative practices in interaction design. These are used to explore, develop and share knowledge of technologies as design materials.
This document is 178 pages with 53 illustrations. I’ve made it available in a number of different digital formats:
It will also be available through AHO’s open-access archive ADORA.
The four included articles have been published in peer-reviewed journals.
Article 1: Exploring ‘immaterials’: mediating design’s invisible materials
This article takes up the issues of so-called ‘immaterial’ and ‘seamless’ technologies and asks how designers might explore them in order to consider them as design materials. It situates interaction design as a sociocultural practice that is concerned with culture, critical approaches and with engaging the technocultural imagination. It concludes by analysing its mediational strategies, such as the use of documentary formats, online film and weblog writing, and the ways in which new material perspectives have been shared, discussed and developed by others.
Article 2: Visualizations of digital interaction in daily life
This article explores how visual signage may make aspects of ubiquitous computing technologies visible, and how digital tools and platforms impact that visual design and semiosis. It looks at a range of ‘identification’ technologies, such as barcodes and RFID, that only become ‘visible’ or ‘interactional’ through a designer’s intervention in physical or visual expression. It finds that designers should emphasize the bindings and distinctions between design processes and visual mediations, and symbols and signs, in engaging with emerging technologies as material for creative and communicative composition.
Article 3: Satellite Lamps
Satellite Lamps is a project about using design to investigate and reveal one of the fundamental constructs of the networked city: GPS – the Global Positioning System. It extends the concept of ‘mediational materials’ to an investigation of the ways in which GPS technology inhabits urban spaces. The article takes up how a discursive and reflexive interaction design practice can contribute new perspectives on networked city life. Importantly, Satellite Lamps is a multimediational web text, involving different media (film, notebooks and a host of images) that allow the richness of the work to come to the surface in a way that would not have been possible through traditional means of academic publishing.
Article 4: Depth of Field: Discursive design research through film
This article is about the role of film in interaction and product design research with technology, and the use of film in exploring and explaining emerging technologies in multiple contexts. It concludes by looking towards the potentials for a discursive design practice, where the object of design and analysis is the discourse that is catalysed by new artefacts, and the emphasis of design research is on communication.
Internet machine is a multi-screen film about the invisible infrastructures of the internet. The film reveals the hidden materiality of our data by exploring some of the machines through which ‘the cloud’ is transmitted and transformed.
Internet machine (showing now at Big Bang Data or watch the trailer) documents one of the largest, most secure and ‘fault-tolerant’ data-centres in the world, run by Telefónica in Alcalá, Spain. The film explores these hidden architectures with a wide, slowly moving camera. The subtle changes in perspective encourage contemplative reflection on the spaces where internet data and connectivity are being managed.
In this film I wanted to look beyond the childish myth of ‘the cloud’, to investigate what the infrastructures of the internet actually look like. It felt important to be able to see and hear the energy that goes into powering these machines, and the associated systems for securing, cooling and maintaining them.
What we find, after being led through layers of identification and security far higher than any airport, are deafeningly noisy rooms cocooning racks of servers and routers. In these spaces you are buffeted by hot and cold air that blusters through everything.
Server rooms are kept cool through quiet, airy ‘plenum’ corridors that divide the overall space. Fibre-optic connections are routed through multiple, redundant paths across the building. In the labyrinthine corridors of the basement, these cables connect to the wider internet through holes in rough concrete walls.
Power is supplied not only through the mains, but backed up with warm caverns of lead batteries, managed by gently buzzing cabinets of relays and switches.
These are backed up in turn by rows of yellow generators, supplied by diesel storage tanks and contracts with fuel supply companies, so that the data centre can keep running for as long as it takes for power to return.
The outside of the building is a facade of enormous stainless steel water tanks, containing tens of thousands of litres of cool water, sitting there in case of fire.
And up on the roof, to the sound of birdsong, is a football-pitch-sized array of shiny aluminium ‘chillers’ that filter and cool the air going into the building.
In experiencing these machines at work, we start to understand that the internet is not a weightless, immaterial, invisible cloud, and instead to appreciate it as a very distinct physical, architectural and material system.
This was a particularly exciting project, a chance for an ambitious and experimental location shoot in a complex environment. Telefónica were particularly accommodating and allowed unprecedented access to shoot across the entire building, not just in the ‘spectacular’ server rooms. Thirty-two locations were shot inside the data centre over the course of two days, followed by five weeks of post-production.
I had to invent some new production methods to create a three-screen installation, based on techniques I developed over ten years ago. The film was shot on both video and stills, with a panoramic head and a Canon 5D Mark III. The video was shot using the Magic Lantern RAW module on the 5D, while the RAW stills were processed in Lightroom and stitched together using Photoshop and Hugin.
The footage was then converted into 3D scenes using camera calibration techniques, so that entirely new camera movements could be created with a virtual three-camera rig. The final multi-screen installation is played out in 4K projected across three screens.
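As a simplified illustration of the playout geometry (hypothetical numbers, and a deliberate simplification – the real pipeline used camera calibration and a virtual three-camera rig, not flat crops): a single wide stitched frame can be carved into three adjacent screen crops.

```python
# A toy illustration (not the actual pipeline) of carving one wide panoramic
# frame into three side-by-side crop windows for a multi-screen installation.

def three_screen_crops(frame_width, frame_height, screen_aspect=16 / 9):
    """Return (x, y, w, h) crop windows for three adjacent screens."""
    crop_w = frame_width // 3
    crop_h = min(frame_height, int(crop_w / screen_aspect))
    y = (frame_height - crop_h) // 2  # centre the crops vertically
    return [(i * crop_w, y, crop_w, crop_h) for i in range(3)]

# Hypothetical source: a stitched frame three 4K screens wide.
crops = three_screen_crops(11520, 2160)
```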
There are more photos available at Flickr.
Internet Machine is produced by Timo Arnall, Centre de Cultura Contemporània de Barcelona – CCCB, and Fundación Telefónica. Thanks to José Luis de Vicente, Olga Subiros, Cira Pérez and María Paula Baylac.
The Immaterials project is concerned with the increasing invisibility of interfaces and infrastructures. The systems we interact with every day, such as WiFi and 3G networks, have a profound impact on how we experience the world. As Adam Greenfield says:
the complex technologies the networked city relies upon to produce its effects remain distressingly opaque, even to those exposed to them on a daily basis. […] it’s hard to be appropriately critical and to make sound choices in a world where we don’t understand the objects around us.
And as James Bridle has eloquently and disturbingly observed:
Those who cannot perceive the network cannot act effectively within it, and are powerless.
The project set out to expose some of the phenomena and mechanisms of technological infrastructures through visual, photographic, narrative, animated and cinematic techniques. Over the last five years I have worked with Einar Sneve Martinussen, Jørn Knutsen, Jack Schulze and Matt Jones towards a body of work that is now brought together in an exhibition for the first time.
From 2004–2008 I speculated about the ways in which wireless interactions inhabited physical space, through my work on a Graphic language for touch, and also through films such as Wireless in the world. Some of my students made beautiful but fictional speculations about the physical qualities of different kinds of radio.
Jack Schulze and I also made a short, playful film called Nearness about action at a distance. In the film, a series of simple reactions are set off by immaterial phenomena, such as radio waves, mobile networks, light, magnetism and wind.
In 2006 we ran a Touch workshop with BERG where we became concerned about the invisibility of RFID technology, and the effect that had on our ability to design with it. We found it extraordinary that a technology that was defined as a proximity or ‘touch’-based interface, was so opaque in terms of its physical, spatial, gestural materiality. How do we as designers make these materials visible, so we can have reflective conversations with them?
We developed Experiments in Field Drawing as a method of revealing, literally drawing, the physical presence of RFID interactions. We revealed these fields in a much richer, multi-dimensional way using photography, animation and light painting in the film Immaterials: Ghost in the Field.
Matt Jones coined the term immaterials to describe the project and gave a great talk about some ways of understanding the immaterials of interaction design. Matt and I also looked at machine vision, another phenomenon that is increasingly becoming a material for design, in Robot Readable World.
In 2011 at AHO, as part of a research project called Yourban, we extended the investigations to WiFi; using similar light-painting techniques, we revealed the enormous scale and pervasiveness of ad-hoc WiFi networks in urban spaces in Immaterials: Light Painting WiFi.
Finally, over the last two years we’ve become increasingly interested in the Global Positioning System (GPS), which has become a central part of both the vision and the implementation of contemporary interfaces.
We have built a series of Satellite Lamps that sense the presence of the 24 GPS satellites in orbit. The lamps change brightness according to the strength of GPS signals they receive, showing how the technology itself is messy and unpredictable, and revealing how GPS is a negotiation between radio waves, earth-orbit geometry and the urban environment.
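For the technically curious, the core behaviour is simple enough to sketch in a few lines of Python (this is an illustrative sketch, not our actual lamp firmware; the NMEA parsing and the brightness curve here are my own assumptions):

```python
# Sketch: map GPS fix quality from a standard NMEA GPGGA sentence to a lamp
# brightness level. More satellites and a lower HDOP (better accuracy) give
# a brighter lamp; no fix gives darkness.

def parse_gpgga(sentence):
    """Extract satellites-in-use and HDOP from a $GPGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("expected a GGA sentence")
    satellites = int(fields[7] or 0)
    hdop = float(fields[8] or 99.9)  # horizontal dilution of precision
    return satellites, hdop

def brightness(satellites, hdop, max_level=255):
    """Convert fix quality into a PWM-style brightness level."""
    if satellites == 0:
        return 0
    sat_factor = min(satellites, 12) / 12   # saturate at 12 satellites
    hdop_factor = 1.0 / max(hdop, 1.0)      # HDOP of 1.0 or better is excellent
    return round(max_level * sat_factor * hdop_factor)

nmea = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
sats, hdop = parse_gpgga(nmea)
level = brightness(sats, hdop)
```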
The visual languages that we’ve developed have ended up in advertising, on the BBC and Discovery Channel, and the techniques have been extended in research at MIT and CIID, and by many designers, enthusiasts and hackers. It’s exciting that both the subject and the methods are being taken up and used broadly by other people, and we’re looking forward to seeing more.
the truly pressing need is for translators: people capable of opening these occult systems up, demystifying them, explaining their implications to the people whose neighborhoods and choices and very lives are increasingly conditioned by them. — Adam Greenfield (2009)
The Immaterials project emerged from the humble preoccupations of a few designers dealing with some of the invisible, immaterial, intangible stuff we had in front of us. These small experiments led to larger and more visually and narratively communicative work. In the end what I think we’ve developed is an approach to technology that revolves around material exploration, explanation and communication. Because images and language, as well as materials, form our understandings of technology, Immaterials has shown how we can use ‘design and playful explorations to shape or stir the popular imagination’.
All the Immaterials projects are on display at Lighthouse in Brighton from 5 September until 13 October 2013.
Satellite Lamps and Robot Readable World are on display at Dread in Amsterdam from 7 September until 24 November 2013.
This is a short film, an experiment in machine-vision footage. It uses found footage from computer vision research to explore how machines are making sense of the world.
As robots begin to inhabit the world alongside us, how do they see and gather meaning from our streets, cities, media and from us? Machines have a tiny, fractional view of our environment, one that sometimes echoes our own human vision and often doesn’t.
Still from the film xXx from Mark Coleran‘s portfolio.
The idea that Apple is grasping at real-life objects because they support effective visual storytelling is very interesting:
In Movie OS, visual storytelling is used to make the system’s important, critical reaction to a user’s action abundantly clear. In Movie OS, you know if you’re logging into Facebook.
I’d argue that visual storytelling doesn’t exist – if it does, it hardly exists at all – in computer or consumer electronics user interfaces. The entire palette of visual storytelling in terms of interface, through accident of history, is purely engineering- and control-led.
This is where, I’d say, Apple is grasping when it says that interfaces should sometimes look toward real-life objects. Real-life physical objects have affordances that are used in effective visual storytelling – and animation – that can be used well to make clear the consequences of actions. It’s more complicated than that, though, and it can go horribly wrong as well as right.
From Dan Hon at Extenuating Circumstances – The future is Movie OS.
“While Charles & Ray were frequently contracted by corporations like Polaroid, Westinghouse, and IBM, they never made films on demand. Nearly all their films represent a symbiotic relationship between the artist and the client, and they only made films when there was genuine interest. Witness Westinghouse ABC (1965), which is essentially a montage of the Westinghouse product line (note that the Westinghouse logo was designed by Paul Rand). Even here there is a spirited interest in the subject. In the film, Charles & Ray focus on the technology and typography at a break-neck tempo and transform what would otherwise be an incredibly dry subject into something rich and lively. Also, in SX-70 (1972), intended as a promotional film for the newly released Polaroid SX-70 camera, the Eames’ take advantage of the opportunity to discuss optics, transistors and to display their own polaroid photographs.”
A good overview via The Films of Charles & Ray Eames.
VOLUME 5a, May 2009. Part one of Volume 5 explores the connections between the moving framed image and geography, offering author-created videos and movie clips to supplement textual materials.
Part two of Volume 5 engages a range of media from televisual and cinematic spaces to altporn’s Suicide Girls to the use of place in transnational news.
I’m really not a fan of the goggle/glasses/helmet variety of AR, where the user wears something in front of their eyes that superimposes 3D objects onto the physical world. In my experience this has been slow, inaccurate, cumbersome and headache-inducing: the worst of VR plus a lot more problems. But AR is really interesting when it’s just a screen and a video feed; it becomes somehow magical to see the same space represented twice, once in front of you and once on screen with magical objects. I can imagine this working really well on mobile phones: the phone screen as a magic lens to secret things.
On that afternoon we didn’t have a printer handy for making the AR marks, so we took to drafting them by hand, stencilling them off the screen with a pencil and inking them in. This hand-crafted process led to all sorts of interesting connections between the possibilities of craft and digital information.
We had lots of ideas about printing the markers on clothes, painting them on nails, glazing them into ceramics, etc. We confused ARtoolkit by drawing markers in perspective, and tried to get recursive objects by using screen based markers and video feedback.
Now as it turns out there is an entire research programme dedicated to looking at just this topic. “Variable Environment”:http://sketchblog.ecal.ch/variable_environment/ is a research programme involving partners like “ECAL”:http://www.ecal.ch/pages/home_new.asp and “EPFL”:http://www.epfl.ch. The great thing is that they are blogging the entire exploratory (they call it ‘sketch’) phase and curating the results online. The work is multi-disciplinary and involves architects, visual designers, computer scientists, interaction designers, etc. Check out the simple “AR ready products”:http://sketchblog.ecal.ch/variable_environment/archives/2006/07/ar_ready_simple.html, “sample applications”:http://sketchblog.ecal.ch/variable_environment/archives/2006/07/applications_1.html and “mixed reality tests”:http://sketchblog.ecal.ch/variable_environment/archives/2006/01/mixed_reality_t_1.html with “various patterns”:http://sketchblog.ecal.ch/variable_environment/archives/2006/03/test_01_pattern.html.
This seems to be part of a shift in the research community, to publishing ongoing and exploratory work online (championed by the likes of “Nicolas Nova”:http://tecfa.unige.ch/perso/staf/nova/blog/ and “Anne Galloway”:http://www.purselipsquarejaw.org/). Very inspirational.
Underneath the desk I have stuck a grid of RFID tags, and on the top surface, the same grid of post-it notes. With the standard Nokia Service Discovery application it is possible to call people, send pre-defined SMSes or load URLs by touching the phone to each post-it on the desk. On the post-its themselves I have hand-written the function, message and the recipient. This is somewhat like a cross between a phone-book, to-do list and temporary diary, with notes, scribbles and tea stains alongside names.
Initial ideas were to spraypaint or silkscreen some of the touch icons to the desk surface, and I may well do that at some point. But for quick prototyping it made sense to use address labels or post-it notes that can be stuck, re-positioned and layered with hand-written notes.
This is an initial step in thinking about the use of RFID and mobile phones, a way of thinking through making. In many ways it is proving to be more inconvenient than the small screen (particularly with the occasionally unreliable firmware on this particular cover, I can’t speak for the production version). But it has highlighted some really interesting issues.
First of all it has brought to the forefront the importance of implicit habits. Initially, it took a real effort to think about the action of using the table as an interface: I would reach for the phone and press names to make a call, instead of placing it on the desk. But for some functions, such as sending an SMS, it has become more habitual.
SMSes have become more like ‘pings’ when very little effort is made to send them. At the same time they are more physically tangible: I rest the phone in a certain position on the desk and wait for it to complete an action. The most useful functions have been “I’m here” or “I’m leaving” messages to close friends.
I have had to consider the ‘negative space’ where the mobile must rest without any action. This space has the potential to be used for context information; a corner of the table could make my phone silent, another corner could change my presence online. Here it would be interesting to refer to Jan Chipchase’s ideas around centres of gravity and points of reflection: it’s these points that could be most directly mapped to behaviour. I’m thinking about other objects and spaces that might be appropriate for this, and perhaps around the idea of thoughtless acts.
If this was a space without wireless internet, I could also imagine this working very well for URLs: quick access to google searches, local services or number lookups, which is usually very tricky on a small screen. Here it would be interesting to think about how the mobile is used in non-connected places, such as the traditional Norwegian Hytte [pdf].
This process also raised a larger issue around the move towards tangible or physical information, which also implies a move towards the social. As I was making the layout of my address book and associated functions, I realised that maybe these things shouldn’t be explicit, visible, social objects. The arrangement of people within the grid makes personal sense; the placement is a personal preference and maps in certain ways to frequency and type of contact. But I wonder how it appears to other people when this pattern is exposed. Will people be offended by my layout? What if I don’t include a rarely called contact? Are there numbers I want to keep secret, hidden behind acronyms in the ‘Names’ menu?
It will be interesting to see how this plays out and changes over time, particularly in the reaction of others. I’ll post more about the use of NFC in other personal contexts in the near future.
h3. The making of…
The desk is made from 20 mm birch ply, surfaced in Linoleum. I stuck a single RFID tag to the underside, in the place that felt most natural. A 10 cm grid was worked out from that point, the remaining tags were stuck in that grid, and the same grid was worked out on top. If I were to re-build the desk with this project in mind, the tags should probably be layered close to the surface, between the ply and Linoleum. This would make them slightly more responsive to touch by giving them a larger read/write distance.
p(caption). Rewritable 512-bit Philips MiFare UL stickers.
p(caption). 10 cm grid of tags on the underside of the desk.
p(caption). Blank post-it notes on the surface, with the same grid.
h3. First impressions
Overall the interaction between phone and RFID tags has been good. The reader/writer is on the base of the phone, at the bottom. This seems a little awkward to use at first, but slowly becomes natural. When I have given it to others, their immediate reaction is to point the top of the phone at the tag, and nothing happens. There follow a few moments of explaining the intricacies of RFID and looking at the phone, with its Nokia ‘fingerprint’ icon. As phones increasingly become replacements for ‘contactless cards’, it seems likely that this interaction will become more habitual and natural.
Once the ‘service discovery’ application is running, the read time from tags is really quick. The sharp vibrations and flashing lights add to a solid feeling of interacting with something, both in the haptic and visual senses. This should turn out to be a great platform for embodied interaction with information and function.
The ability to read and write to tags makes it adaptable as a platform far wider than just advertising or ticketing. As an interaction designer I feel quite enabled by this technology: the three basic functions (making phonecalls, going to URLs, or sending SMSes) are enough to start thinking about tangible interactions without having to go and program any Java midlets or server-side applications.
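To make those three functions concrete, here is a toy model in Python (hypothetical names and data – nothing like this runs on the phone itself): each tag on the desk grid stores one action, and reading a tag dispatches it.

```python
# Toy model of the three tag shortcut kinds: call a number, load a URL,
# or send a pre-defined SMS. The grid coordinates and values are invented.

CALL, URL, SMS = "call", "url", "sms"

def dispatch(shortcut):
    """Return a human-readable description of the action a tag triggers."""
    kind = shortcut["kind"]
    if kind == CALL:
        return f"calling {shortcut['number']}"
    if kind == URL:
        return f"loading {shortcut['url']}"
    if kind == SMS:
        return f"sending '{shortcut['body']}' to {shortcut['number']}"
    raise ValueError(f"unknown shortcut kind: {kind}")

# The desk as a dictionary: (column, row) -> shortcut stored on that tag.
desk_grid = {
    (0, 0): {"kind": CALL, "number": "+4712345678"},
    (0, 1): {"kind": SMS, "number": "+4712345678", "body": "I'm leaving"},
    (1, 0): {"kind": URL, "url": "http://www.elasticspace.com"},
}

action = dispatch(desk_grid[(0, 1)])  # placing the phone on that post-it
```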
I’m really happy that Nokia is putting this technology into a ‘low-end’ phone rather than pushing it out in a ‘smartphone’ range. This is where there is potential for wider usage and mass-market applications, especially around gaming and content discovery.
I had some problems launching the ‘service discovery’ application. Sometimes it works, sometimes it doesn’t and it’s difficult to tell why this is. It would be great to be able to place the phone on the table, knowing that it will respond to a tag, but it was just a little too unreliable to do that without checking to see that it had responded. The version I have still says it’s a prototype, so this may well be sorted out by the released version.
Overall there is a lack of integration between the service discovery application and the rest of the system: Contacts, SMS archive and service bookmarks etc. At the moment we need to enter the application to write and manage tags, or to give a ‘shortcut’ to another phone, but it seems that, as with bluetooth and IR, this should be part of the contextual menus that appear under ‘Options’ within each area of the phone. There are also some infuriating prompts that appear when interacting with URLs, more details below.
p(caption). The phone opens the ‘service discovery’ application whenever it detects a compatible RFID tag near the base of the phone (when the keypad lock is off). This part is a bit obscure: sometimes it doesn’t ‘wake up’ for a tag, and the application needs to be loaded before it will read properly. Once the application is open (about 2-3 seconds) the read time of the tags seems instantaneous.
p(caption). The shortcuts menu lists the stored shortcuts. Confusingly, this is different from ‘bookmarks’ and the ‘names’ list on the phone, although names can be searched from within the application. I think tighter integration with the OS is called for.
p(caption). Shortcuts can be added, edited, deleted, etc. in the same way as contacts. They can be ‘Given’ to another phone or ‘Written’ to a tag.
p(caption). There are three kinds of shortcuts: Call, URL or SMS. ‘Call’ will create a call to a pre-defined number, ‘URL’ will load a pre-defined URL, and ‘SMS’ will send a pre-defined SMS to a particular number. This part of the application has the most room for innovative extensions: we should be able to set the state of the phone, change profiles, change themes, download graphics, etc. This can be achieved by loading URLs, but URLs and mobiles don’t mix, so why should we be presented with them, when there could be a more usable layer in between? There could also be preferences for prompts: at the moment each action has to be confirmed with a yes or a no, but in some secure environments it would be nice to be able to have a function launched without the extra button push.
p(caption). If a tag contains no data, then we are notified and placed back on the main screen (as happened when I tried to write to my Oyster card).
p(caption). If the tag is writeable we are asked which shortcut to write to the tag.
p(caption). When we touch a tag with a shortcut on it, a prompt appears asking for confirmation. This is a level of UI to prevent mistakes, and a certain level of security, but it also reduces the overall usability of the system. With URL launching, there are two stages of confirmation, which is infuriating. There needs to be some other mode of confirmation, and the ‘service discovery’ app needs to somehow be deeper in the system to avoid these double button presses.
p(caption). Lastly, there is a log of actions. Useful to see if the application has been reading something in your bag or wallet, without you knowing…
I too have “ditched”:http://interconnected.org/home/2005/04/12/my_40gb_ipod_has my large iPod for the “iPod Shuffle”:http://www.apple.com/ipodshuffle/, finding that “I love the white-knuckle ride of random listening”:http://www.cityofsound.com/blog/2005/01/the_rise_and_ri.html. But that doesn’t exclude the need for a better small-screen-based music experience.
The pseudo-analogue interface of the iPod clickwheel doesn’t cut it. It can be difficult to control when accessing huge alphabetically ordered lists, and the acceleration or inertia of the view can be really frustrating. The combinations of interactions (clicking into deeper lists, scrolling, clicking deeper) turn into long and tortuous experiences if you are engaged in any simultaneous activity. Plus it’s difficult to use through clothing, or with gloves.
h3. Music and language
My first thought was something "Jack":http://www.jackschulze.co.uk and I discussed a long time ago: using a phone keypad to type the first few letters of an artist, album or genre and seeing the results in real-time, much like "iTunes":http://www.apple.com/itunes/jukebox.html does on a desktop. I find myself using this a lot in iTunes rather than browsing lists.
“Predictive text input”:http://www.t9.com/ would be very effective here, when limited to the dictionary of your own music library. (I wonder if “QIX search”:http://www.christianlindholm.com/christianlindholm/2005/02/qix_from_zi_cor.html would do this for a music library on a mobile?)
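Here is a rough sketch of that idea (my own toy implementation, not T9 or QIX): each letter collapses to its keypad digit, and the library is filtered against the typed digit sequence, so ambiguity shrinks fast within a personal collection.

```python
# Keypad search limited to one's own music library: typing "28" matches any
# artist whose name starts with a letter on key 2 then a letter on key 8.

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {letter: digit for digit, letters in KEYPAD.items()
                   for letter in letters}

def to_digits(name):
    """Map a name to its keypad digit sequence, ignoring non-letters."""
    return "".join(LETTER_TO_DIGIT.get(ch, "") for ch in name.lower())

def keypad_search(digits, library):
    """Return artists whose keypad encoding starts with the typed digits."""
    return [name for name in library if to_digits(name).startswith(digits)]

# A hypothetical library.
library = ["Aphex Twin", "Autechre", "Boards of Canada", "Brian Eno"]
results = keypad_search("28", library)  # 2='a', 8='u' -> "Autechre"
```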
Maybe now is the time to look at this as we see “mobile”:http://www.sonyericsson.com/spg.jsp?cc=gb&lc=en&ver=4000&template=pp1_loader&php=php1_10245&zone=pp&lm=pp1&pid=10245 “phone”:http://www.nokia.com/n91/ “music convergence”:http://www.engadget.com/entry/1234000540040867/.
h3. Navigating through movement
Since scrolling is inevitable to some degree, even within fine search results, what about using simple movement or tilt to control the search results? One of the problems with using movement for input is context: when is movement intended? And when is movement the result of walking or a bump in the road?
One solution could be a “squeeze and shake” quasi-mode: squeezing the device puts it into a receptive state.
Another could be more reliance on the 3 axes of tilt, which are less sensitive to larger movements of walking or transport.
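The two ideas could combine into something like this sketch (entirely hypothetical logic and thresholds): tilt only scrolls while the device is squeezed, and small tilts below a dead zone are treated as walking noise.

```python
# "Squeeze and shake" quasi-mode sketch: scrolling responds to tilt only in
# the squeezed, receptive state, with a dead zone for incidental movement.

TILT_THRESHOLD = 15.0  # degrees; an assumed dead zone for walking and bumps

def scroll_delta(squeezed, tilt_degrees):
    """Return list scroll steps for one (squeeze, tilt) sensor reading."""
    if not squeezed:
        return 0  # not in the receptive state: ignore all movement
    if abs(tilt_degrees) < TILT_THRESHOLD:
        return 0  # within the dead zone: treated as noise
    # Scroll faster the further the device is tilted past the threshold.
    steps = 1 + int((abs(tilt_degrees) - TILT_THRESHOLD) // 10)
    return steps if tilt_degrees > 0 else -steps
```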
I’m not sure about gestural interfaces: most of the prototypes I have seen are difficult to learn, and require a certain level of performativity that I’m not sure everyone wants to be doing in public space. But having accelerometers inside these devices should, and would, allow for the hacking together of other personal, adaptive gestural interfaces that would perhaps access higher-level functions of the device.
One gesture I think could be simple and effective would be covering the ear to switch tracks. To try this out we could add a light or capacitive touch sensor to each earbud.
With this I think we would have trouble with interference from other objects, like resting the head against a wall. But there’s something nicely personal and intimate about putting the hand next to the ear, as if to listen more intently.
h3. More knobs
Things that are truly analogue, like volume and time, should be mapped to analogue controls. I think one of the greatest unexplored areas in digital music is real-time audio-scrubbing, currently not well supported on any device, probably because of technical constraints. But scrubbing through an entire album, with a directly mapped input, would be a great way of finding the track you wanted.
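A direct mapping is easy to sketch (assuming a single absolute-position knob whose full travel maps linearly onto the whole album, so a knob position always corresponds to the same moment in the music):

```python
# Directly mapped album scrubbing: a knob position in [0, 1] lands on a
# specific track and a specific number of seconds into that track.

def scrub_position(knob_fraction, track_lengths):
    """Map a knob position in [0, 1] to (track index, seconds into track)."""
    total = sum(track_lengths)
    target = knob_fraction * total
    for index, length in enumerate(track_lengths):
        if target < length:
            return index, target
        target -= length
    return len(track_lengths) - 1, track_lengths[-1]  # knob at the very end

album = [240, 180, 300]  # hypothetical track lengths in seconds
track, offset = scrub_position(0.5, album)  # halfway through the album
```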
Research projects like the “DJammer”:http://www.hpl.hp.com/research/mmsl/projects/djammer/ are starting to look at this, specifically for DJs. But since music is inherently time-based there is more work to be done here for everyday players and devices. Let’s skip the interaction design habits we’ve learnt from the CD era and go back to vinyl :)
h3. Evolution of the display
Where displays are required, I hope we can be free of small, fuzzy, low-contrast LCDs. With new displays being printable on paper, textiles and other surfaces there’s the possibility of improving the usability, readability and “glanceability” of the display.
We are beginning to see signs of this with this OLED display on this “Sony Network Walkman”:http://dapreview.net/comment.php?comment.news.1086 where the display is under the surface of the product material, without a separate “glass” area.
For the white surface of an iPod, the high-contrast, “paper-like surfaces”:http://www.polymervision.com/New-Center/Downloads/Index.html of technologies like e-ink would make great, highly readable displays.
So I really need to get prototyping with accelerometers and display technologies, to understand simple movement and gesture in navigating music libraries. There are other questions to answer: I’m wondering if using movement to scroll through search results would create the appearance of a large screen space, through the lens of a small screen. As with “bumptunes”:http://interconnected.org/home/2005/03/04/apples_powerbook, I think many more opportunities will emerge as we make these things.
h3. More reading
“Designing for Shuffling”:http://www.cityofsound.com/blog/2005/04/designing_for_s.html
“Thoughts on the iPod Shuffle”:http://interconnected.org/home/2005/04/22/there_are_two
“On the body”:http://people.interaction-ivrea.it/b.negrillo/onthebody/