Overall, the interaction between the phone and RFID tags has been good. The reader/writer is at the base of the phone. This seems a little awkward at first, but slowly becomes natural. When I have given it to others, their immediate reaction is to point the top of the phone at the tag, and nothing happens. There follow a few moments of explaining the intricacies of RFID and looking at the phone, with its Nokia ‘fingerprint’ icon. As phones increasingly become replacements for ‘contactless cards’, it seems likely that this interaction will become more habitual and natural.
Once the ‘service discovery’ application is running, the read time from tags is really quick. The sharp vibrations and flashing lights add to a solid feeling of interacting with something, both in the haptic and visual senses. This should turn out to be a great platform for embodied interaction with information and function.
The ability to read and write tags makes it adaptable as a platform, for much more than advertising or ticketing. As an interaction designer I feel quite enabled by this technology: the three basic functions (making phone calls, going to URLs, or sending SMSs) are enough to start thinking about tangible interactions without having to program any Java midlets or server-side applications.
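For a sense of how little data those three shortcuts need, here is a minimal sketch of encoding a tag payload as a URI record, following the NFC Forum’s NDEF URI format rather than whatever internal format the 3220 prototype actually uses; the phone number and URL are placeholders:

```python
def ndef_uri_record(uri: str) -> bytes:
    """Encode a single, short NDEF record carrying a URI (NFC Forum URI RTD)."""
    # Common URI prefixes are abbreviated to a single code byte.
    prefixes = [("http://www.", 0x01), ("https://www.", 0x02),
                ("http://", 0x03), ("https://", 0x04), ("tel:", 0x05)]
    code, rest = 0x00, uri  # 0x00 = no abbreviation
    for prefix, c in prefixes:
        if uri.startswith(prefix):
            code, rest = c, uri[len(prefix):]
            break
    payload = bytes([code]) + rest.encode("utf-8")
    # Header byte 0xD1: MB|ME|SR flags set, TNF=0x01 (well-known), type 'U'.
    return bytes([0xD1, 0x01, len(payload), ord("U")]) + payload

# The three basic functions map onto three URI schemes:
call_tag = ndef_uri_record("tel:+4712345678")
url_tag  = ndef_uri_record("http://www.example.com")
sms_tag  = ndef_uri_record("sms:+4712345678")
```

Each of those tags is only a few dozen bytes, which is part of what makes this such a lightweight platform to design for.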
I’m really happy that Nokia is putting this technology into a ‘low-end’ phone rather than pushing it out in a ‘smartphone’ range. This is where there is potential for wider usage and mass-market applications, especially around gaming and content discovery.
I had some problems launching the ‘service discovery’ application. Sometimes it works, sometimes it doesn’t, and it’s difficult to tell why. It would be great to be able to place the phone on the table, knowing that it will respond to a tag, but it was just a little too unreliable to do that without checking that it had responded. The version I have still says it’s a prototype, so this may well be sorted out in the released version.
Overall there is a lack of integration between the service discovery application and the rest of the system: Contacts, the SMS archive, service bookmarks and so on. At the moment we need to enter the application to write and manage tags, or to give a ‘shortcut’ to another phone; as with Bluetooth and infrared, this should be part of the contextual menus that appear under ‘Options’ within each area of the phone. There are also some infuriating prompts that appear when interacting with URLs, more details below.
This work explores the visual link between information and physical things, specifically around the emerging use of the mobile phone to interact with RFID or NFC. It was presented as a talk and poster at Design Engaged in Berlin on 11 November 2005.
As mobile phones are increasingly able to read and write to RFID tags embedded in the physical world, I am wondering how we will appropriate this for personal and social uses.
I’m interested in the visual link between information and physical things. How do we represent an object that has digital function, information or history beyond its physical form? What are the visual clues for this interaction? We shouldn’t rely on a kind of mystery meat navigation (the scourge of the web-design world) where we have to touch everything to find out its meaning.
This work doesn’t attempt to be a definitive system for marking physical things; it is an exploratory process to find out how digital/physical interactions might work. It uncovers interesting directions while the technology is still largely out of the hands of everyday users.
References to existing work
The inspiration for this is in the marking of public space and existing iconography for interactions with objects: push buttons on pedestrian crossings, contactless cards, signage and instructional diagrams.
This draws heavily on a substantial body of images of visual markings in public space. One of the key findings of this research was that the visibility and placement of stickers in public space is an essential part of their use. Current research in ubicomp and ‘locative media’ is not addressing these visibility issues.
There is also a growing collection of existing iconography in contactless payment systems, with a number of interesting graphic treatments in a technology-led, vernacular form. In Japan there are also instances of touch-based interactions being represented by characters, colours and iconography that are abstracted from the action itself.
Sketching and development revealed five initial directions: circles, wireless, card-based, mobile-based and arrows (see the poster for more details). The icons range from being generic (abstracted circles or arrows to indicate function) to specific (mobile phones or cards touching tags).
Arrows might be suitable for specific functions or actions, in combination with other illustrative material. Icons with mobile phones or cards might be helpful in situations where basic usability for a wide range of users is required. Although the ‘wireless’ icons are often found on current card readers, they do not successfully indicate the touch-based interactions inherent in the technology, and may be confused with WiFi or Bluetooth. The circular icons work at the highest level of abstraction, and might be most suitable for generic labelling.
For further investigation I have selected a simple circle, surrounded by an ‘aura’ described by a dashed line. I think this successfully communicates the near field nature of the technology, while suggesting that the object contains something beyond its physical form.
In most current NFC implementations, such as the 3220 from Nokia and many iMode phones, the RFID reader is in the bottom of the phone. This means that the area of ‘activation’ is obscured in many cases by the phone and hand. The circular iconography allows for a space to be marked as ‘active’ by the size of the circle, and we might see it used to mark areas rather than points. Usability may improve when these icons are around the same size as the phone, rather than being a specific point to touch.
Work in progress
These are early days for this technology, and this is work in progress. There is more to be done in looking at specific applications, finding suitable uses and extending the language to cover other functions and content.
Until now I have been concerned with generic iconography for a digitally augmented object. But this should develop into a richer language as the applications for this type of interaction become more specific, and more related to the types of objects and information being used. For example, it would be interesting to find a graphic treatment that could be applied both to a Pokémon sticker offering power-ups and to a bus stop offering timetable downloads.
I’m also interested in the physical placement of these icons. How large or visible should they be? Are there places that should not be ‘active’? And how will this fit with the natural centres of gravity of the mobile phone in public and private space?
I’ll expand on these things in a few upcoming projects that explore touch-based interactions in personal spaces.
Feel free to use and modify the icons; I would be very interested to see how they can be applied and extended.
Oyster Card, Transport for London.
eNFC, Inside Contactless.
ExpressPay, American Express.
MIFARE, various vendors.
Suica, East Japan Railway Company (JR East).
RFID Field Force Solutions, Nokia.
NFC shell for 3220, Nokia.
ERG Transit Systems payment, Dubai.
Various generic contactless vendors.
Contactless payment symbol, Mastercard.
Open Here, Paul Mijksenaar, Piet Westendorp, Thames and Hudson, 1999.
Understanding Comics, Scott McCloud, Harper, 1994.
I too have ditched my large iPod for the iPod Shuffle, finding that I love the white-knuckle ride of random listening. But that doesn’t exclude the need for a better small-screen-based music experience.
The pseudo-analogue interface of the iPod clickwheel doesn’t cut it. It can be difficult to control when scrolling through huge alphabetically ordered lists, and the acceleration or inertia of the view can be really frustrating. The combination of interactions (clicking into deeper lists, scrolling, clicking deeper again) turns into a long and tortuous experience if you are engaged in any simultaneous activity. Plus it’s difficult to use through clothing, or with gloves.
Music and language
My first thought was something Jack and I discussed a long time ago: using a phone keypad to type the first few letters of an artist, album or genre and seeing the results in real time, much as iTunes does on a desktop. I find myself using this a lot in iTunes rather than browsing lists.
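As a rough sketch of how that keypad filtering might work, here is a T9-style match where each key-press stands for any of its letters, so no multi-tap is needed; the artist list is made up:

```python
# Letters grouped under each key, as on a standard ITU phone keypad.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def matches(name: str, digits: str) -> bool:
    """True if the first letters of `name` fit the typed digit sequence."""
    letters = [c for c in name.lower() if c.isalpha()]
    if len(letters) < len(digits):
        return False
    return all(ch in KEYPAD.get(d, "") for d, ch in zip(digits, letters))

def filter_artists(artists: list[str], digits: str) -> list[str]:
    """Narrow the list live as each digit is typed."""
    return [a for a in artists if matches(a, digits)]

# Typing 5-6-4 ("j/k/l", "m/n/o", "g/h/i") narrows straight to John Coltrane:
print(filter_artists(["John Coltrane", "Joni Mitchell", "Miles Davis"], "564"))
```

Three key-presses are usually enough to collapse even a large library to a handful of results, which is what makes the real-time feedback so satisfying.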
Navigating through movement
Since scrolling is inevitable to some degree, even within narrowed search results, what about using simple movement or tilt to control the search results? One of the problems with using movement for input is context: when is movement intended, and when is it the result of walking or a bump in the road?
One solution could be a “squeeze and shake” quasi-mode: squeezing the device puts it into a receptive state.
Another could be more reliance on the three axes of tilt, which are less sensitive to the larger movements of walking or transport.
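A common way to make tilt usable here is to low-pass filter the accelerometer, so that slow changes of orientation come through while the high-frequency jolts of walking are discarded. A minimal sketch, assuming raw three-axis samples in g:

```python
import math

ALPHA = 0.1  # smoothing factor: lower = steadier, but slower to respond

def tilt_filter(samples):
    """Low-pass filter raw accelerometer samples to recover slow tilt,
    discarding the jolts of walking or a bump in the road."""
    gx = gy = gz = 0.0
    for ax, ay, az in samples:  # raw three-axis readings, in g
        gx += ALPHA * (ax - gx)
        gy += ALPHA * (ay - gy)
        gz += ALPHA * (az - gz)
        # Pitch angle of the device, usable as a scroll rate.
        yield math.degrees(math.atan2(gx, math.sqrt(gy * gy + gz * gz)))
```

Mapping the pitch beyond a small dead zone to scroll speed, and only while the squeeze quasi-mode is held, would combine both ideas above.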
I’m not sure about gestural interfaces: most of the prototypes I have seen are difficult to learn, and require a level of performativity that I’m not sure everyone wants to display in public space. But having accelerometers inside these devices should allow for hacking together other personal, adaptive gestural interfaces that could access higher-level functions of the device.
One gesture I think could be simple and effective would be covering the ear to switch tracks. To try this out we could add a light or capacitive touch sensor to each earbud.
With this I think we would have trouble with interference from other objects, like resting the head against a wall. But there’s something nicely personal and intimate about putting the hand next to the ear, as if to listen more intently.
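One way to reduce that interference would be to accept only a deliberate cover-and-release within a plausible time window: a wall or pillow covers the sensor for far longer than a tap of the hand. A sketch, where read_covered and skip_track are hypothetical callbacks for the earbud sensor and the player:

```python
import time

COVER_MIN = 0.15   # shorter than this is probably a brush or noise
COVER_MAX = 1.0    # longer than this is probably a wall or a pillow

def watch_earbud(read_covered, skip_track):
    """Fire skip_track() only for a deliberate cover-and-release gesture.
    read_covered() is a hypothetical poll returning True while covered."""
    covered_at = None
    while True:
        covered = read_covered()
        now = time.time()
        if covered and covered_at is None:
            covered_at = now                  # gesture may have started
        elif not covered and covered_at is not None:
            held = now - covered_at
            covered_at = None
            if COVER_MIN <= held <= COVER_MAX:
                skip_track()                  # plausible gesture: act on it
        time.sleep(0.02)                      # ~50 Hz poll
```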
Things that are truly analogue, like volume and time, should be mapped to analogue controls. I think one of the greatest unexplored areas in digital music is real-time audio scrubbing, currently not well supported on any device, probably because of technical constraints. But scrubbing through an entire album, with a directly mapped input, would be a great way of finding the track you want.
Research projects like the DJammer are starting to look at this, specifically for DJs. But since music is inherently time-based there is more work to be done here for everyday players and devices. Let’s skip the interaction design habits we’ve learnt from the CD era and go back to vinyl :)
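The key property of a directly mapped input is that hand position corresponds absolutely to timeline position, like the needle on a record, rather than nudging forwards or backwards from wherever you happen to be. A sketch of what that mapping could look like:

```python
def scrub_position(angle_deg: float, album_seconds: float) -> float:
    """Map one full turn of a wheel (0-360 degrees) directly onto a whole
    album, so a given hand position always lands on the same moment."""
    return (angle_deg % 360.0) / 360.0 * album_seconds

# Half a turn into a 40-minute album is always minute 20:
print(scrub_position(180.0, 40 * 60) / 60)  # 20.0
```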
Evolution of the display
Where displays are required, I hope we can be free of small, fuzzy, low-contrast LCDs. With new displays being printable on paper, textiles and other surfaces there’s the possibility of improving the usability, readability and “glanceability” of the display.
We are beginning to see signs of this with the OLED display on the Sony Network Walkman, where the display sits under the surface of the product material, without a separate ‘glass’ area.
For the white surface of an iPod, the high-contrast, paper-like surfaces of technologies like e-ink would make great, highly readable displays.
So I really need to get prototyping with accelerometers and display technologies, to understand simple movement and gesture in navigating music libraries. There are other questions to answer: I’m wondering if using movement to scroll through search results would create the appearance of a large screen space, through the lens of a small screen. As with bumptunes, I think many more opportunities will emerge as we make these things.
David’s reference to 18 points as the minimum size equates to 18 pixels if you are coming from a web background.
On some iTV projects I have pushed the type down to 16 pixels, but be very careful about colours and contrast, and enquire about the production path to air: if the work is going to be transferred via DV tape, squeezed through an old composite link, or online-edited with high compression, then you might want to leave type as large as possible.
In some cases – such as using white text on a red background – you can add a very subtle black shadow to the type, which will help stop colour bleed and crawling effects. Even if you dislike drop-shadow effects, it will still look flat and lovely on a broadcast monitor.
Safe areas need to be taken with a pinch of salt. The default safe areas in most editing and compositing software date from before the widespread use of modern widescreen televisions.
Try extending the safe area for non-essential text in interactive projects, and consult broadcaster guidelines for their widescreen policies: many channels now broadcast in 14:9 to terrestrial boxes, and offer options to satellite and cable viewers.
The largest problem is that widescreen viewers often crop the top and bottom of the image by setting their TV to crop 4:3 to 16:9. Some cable/satellite companies remove the left and right of the image to crop 16:9 to 4:3 for non-widescreen viewers, leaving us only a tiny, safe rectangle in the centre of the image to work with.
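To get a feel for how tiny that rectangle is, here is a sketch that computes the region of a 16:9 frame surviving both crops, using square pixels for simplicity (broadcast PAL actually uses non-square, anamorphic pixels):

```python
def surviving_rect(width: float, height: float):
    """Central region of a 16:9 frame that survives both crops:
    first 16:9 -> 4:3 (sides removed for non-widescreen viewers),
    then 4:3 -> 16:9 (top and bottom removed by a zooming TV)."""
    w = height * 4 / 3   # width left after the 4:3 crop
    h = w * 9 / 16       # height left after the 16:9 zoom
    return ((width - w) / 2, (height - h) / 2, w, h)

# A 1024x576 square-pixel frame keeps only a 768x432 centre:
print(surviving_rect(1024, 576))  # (128.0, 72.0, 768.0, 432.0)
```

In other words, only 75% of each dimension, just over half the picture area, is safe against both habits at once.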
There are also excellent documents on picture standards from the BBC.
But there is one thing I don’t understand. According to the BBC: “Additional [20 or 26 horizontal] pixels are not taken into account when calculating the aspect ratio, but without them images transferred between systems will not be the correct shape.” Can anyone confirm that this is the case for PAL images?