Augmented reality experiments
I’m really not a fan of the goggle/glasses/helmet variety of AR, where the user wears something in front of their eyes that superimposes 3D objects onto the physical world. In my experience this has been slow, inaccurate, cumbersome and headache-inducing: the worst of VR plus a lot more problems. But AR becomes somehow magical when it’s just a screen and a video feed: seeing the same space represented twice, once in front of you and once on screen with magical objects. I can imagine this working really well on mobile phones: the phone screen as a magic lens onto secret things.
On that afternoon we didn’t have a printer handy for making the AR markers, so we took to drafting them by hand, stencilling them off the screen with a pencil and inking them in. This hand-crafted process led to all sorts of interesting connections between the possibilities of craft and digital information.
We had lots of ideas about printing the markers on clothes, painting them on nails, glazing them into ceramics, etc. We confused ARtoolkit by drawing markers in perspective, and tried to get recursive objects by using screen based markers and video feedback.
Now as it turns out there is an entire research programme dedicated to looking at just this topic. “Variable Environment”:http://sketchblog.ecal.ch/variable_environment/ is a research programme involving partners like “ECAL”:http://www.ecal.ch/pages/home_new.asp and “EPFL”:http://www.epfl.ch. The great thing is that they are blogging the entire exploratory (they call it ‘sketch’) phase and curating the results online. The work is multi-disciplinary and involves architects, visual designers, computer scientists, interaction designers, etc. Check out the simple “AR ready products”:http://sketchblog.ecal.ch/variable_environment/archives/2006/07/ar_ready_simple.html, “sample applications”:http://sketchblog.ecal.ch/variable_environment/archives/2006/07/applications_1.html and “mixed reality tests”:http://sketchblog.ecal.ch/variable_environment/archives/2006/01/mixed_reality_t_1.html with “various patterns”:http://sketchblog.ecal.ch/variable_environment/archives/2006/03/test_01_pattern.html.
This seems to be part of a shift in the research community towards publishing ongoing and exploratory work online (championed by the likes of “Nicolas Nova”:http://tecfa.unige.ch/perso/staf/nova/blog/ and “Anne Galloway”:http://www.purselipsquarejaw.org/). Very inspirational.
You are here
The address book desk
Underneath the desk I have stuck a grid of RFID tags, and on the top surface, the same grid of post-it notes. With the standard Nokia Service Discovery application it is possible to call people, send pre-defined SMSes or load URLs by touching the phone to each post-it on the desk. On the post-its themselves I have hand-written the function, message and the recipient. This is somewhat like a cross between a phone-book, to-do list and temporary diary, with notes, scribbles and tea stains alongside names.
Initial ideas were to spraypaint or silkscreen some of the touch icons to the desk surface, and I may well do that at some point. But for quick prototyping it made sense to use address labels or post-it notes that can be stuck, re-positioned and layered with hand-written notes.
This is an initial step in thinking about the use of RFID and mobile phones, a way of thinking through making. In many ways it is proving to be more inconvenient than the small screen (particularly with the occasionally unreliable firmware on this particular cover, I can’t speak for the production version). But it has highlighted some really interesting issues.
First of all it has brought to the forefront the importance of implicit habits. Initially, it took a real effort to think about the action of using the table as an interface: I would reach for the phone and press names to make a call, instead of placing it on the desk. But for some functions, such as sending an SMS, it has become more habitual.
SMSes have become more like ‘pings’ when very little effort is made to send them. At the same time they are more physically tangible: I rest the phone in a certain position on the desk and wait for it to complete an action. The most useful functions have been “I’m here” or “I’m leaving” messages to close friends.
I have had to consider the ‘negative space’ where the mobile must rest without any action. This space has the potential to be used for context information; a corner of the table could make my phone silent, another corner could change my presence online. Here it would be interesting to refer to Jan Chipchase’s ideas around centres of gravity and points of reflection: it’s these points that could be most directly mapped to behaviour. I’m thinking about other objects and spaces that might be appropriate for this, and perhaps about the idea of thoughtless acts.
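The desk-as-interface idea above can be sketched as a simple lookup from tag position to phone action. This is only an illustrative model (the names, grid cells and actions are invented, and the real Service Discovery application works from data written on each tag, not a table like this), but it shows how ‘negative space’ cells and context corners could sit alongside call and SMS shortcuts:

```python
# Illustrative sketch: the desk as a grid of tag positions, each mapped
# to a phone action. Names and cell assignments are invented examples.
DESK = {
    (0, 0): ('sms',  'Anna',  "I'm leaving"),
    (0, 1): ('sms',  'Einar', "I'm here"),
    (1, 0): ('call', 'Home',  None),
    (3, 3): ('profile', 'silent', None),   # a context corner: changes state, sends nothing
}

def on_tag(cell):
    """Look up the action for the tag the phone was placed on.
    Empty cells are 'negative space': the phone just rests there."""
    return DESK.get(cell, ('rest', None, None))

print(on_tag((0, 1)))   # a pre-defined SMS shortcut
print(on_tag((2, 2)))   # an empty cell: no action
```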
If this was a space without wireless internet, I could also imagine this working very well for URLs: quick access to google searches, local services or number lookups, which is usually very tricky on a small screen. Here it would be interesting to think about how the mobile is used in non-connected places, such as the traditional Norwegian Hytte [pdf].
This process also raised a larger issue around the move towards tangible or physical information, which also implies a move towards the social. As I was making the layout of my address book and associated functions, I realised that maybe these things shouldn’t be explicit, visible, social objects. The arrangement of people within the grid makes personal sense; the placement is a personal preference and maps in certain ways to frequency and type of contact. But I wonder how it appears to other people when this pattern is exposed. Will people be offended by my layout? What if I don’t include a rarely called contact? Are there numbers I want to keep secret, hidden behind acronyms in the ‘Names’ menu?
It will be interesting to see how this plays out and changes over time, particularly in the reaction of others. I’ll post more about the use of NFC in other personal contexts in the near future.
h3. The making of…
The desk is made from 20 mm birch ply, surfaced in Linoleum. I stuck a single RFID tag to the underside, in the place that felt most natural. A 10 cm grid was worked out from that point, the remaining tags were stuck along it, and the same grid was marked out on top. If I were to re-build the desk with this project in mind, the tags should probably be layered closer to the surface, between the ply and the Linoleum. This would make them slightly more responsive to touch by giving them a larger read/write distance.
p(caption). Rewriteable 512 bit, Philips MiFare UL stickers.
p(caption). 10 cm grid of tags on the underside of the desk.
p(caption). Blank post-it notes on the surface, with the same grid.
Nokia 3220 with NFC
h3. First impressions
Overall the interaction between phone and RFID tags has been good. The reader/writer is at the base of the phone, on the bottom. This seems a little awkward at first, but slowly becomes natural. When I have given it to others, their immediate reaction is to point the top of the phone at the tag, and nothing happens. There follow a few moments of explaining the intricacies of RFID and looking at the phone, with its Nokia ‘fingerprint’ icon. As phones increasingly become replacements for ‘contactless cards’, it seems likely that this interaction will become more habitual and natural.
Once the ‘service discovery’ application is running, the read time from tags is really quick. The sharp vibrations and flashing lights add to a solid feeling of interacting with something, both in the haptic and visual senses. This should turn out to be a great platform for embodied interaction with information and function.
The ability to read and write to tags makes it potentially adaptable as a platform much wider than advertising or ticketing. As an interaction designer I feel quite enabled by this technology: the three basic functions (making phone calls, loading URLs and sending SMSes) are enough to start thinking about tangible interactions without having to program any Java midlets or server-side applications.
I’m really happy that Nokia is putting this technology into a ‘low-end’ phone rather than pushing it out in a ‘smartphone’ range. This is where there is potential for wider usage and mass-market applications, especially around gaming and content discovery.
I had some problems launching the ‘service discovery’ application. Sometimes it works, sometimes it doesn’t and it’s difficult to tell why this is. It would be great to be able to place the phone on the table, knowing that it will respond to a tag, but it was just a little too unreliable to do that without checking to see that it had responded. The version I have still says it’s a prototype, so this may well be sorted out by the released version.
Overall there is a lack of integration between the service discovery application and the rest of the system: Contacts, SMS archive, service bookmarks etc. At the moment we need to enter the application to write and manage tags, or to give a ‘shortcut’ to another phone, but it seems that, as with bluetooth and IR, this should be part of the contextual menus that appear under ‘Options’ within each area of the phone. There are also some infuriating prompts that appear when interacting with URLs; more details below.
p(caption). The phone opens the ‘service discovery’ application whenever it detects a compatible RFID tag near the base of the phone (when the keypad lock is off). This part is a bit obscure: sometimes it doesn’t ‘wake up’ for a tag, and the application needs to be loaded before it will read properly. Once the application is open (about 2-3 seconds) the read time of the tags seems instantaneous.
p(caption). The shortcuts menu gives access to stored shortcuts. Confusingly, this is different from the ‘bookmarks’ and ‘names’ lists on the phone, although names can be searched from within the application. I think tighter integration with the OS is called for.
p(caption). Shortcuts can be added, edited, deleted, etc. in the same way as contacts. They can be ‘Given’ to another phone or ‘Written’ to a tag.
p(caption). There are three kinds of shortcuts: Call, URL or SMS. ‘Call’ will create a call to a pre-defined number, ‘URL’ will load a pre-defined URL, and ‘SMS’ will send a pre-defined SMS to a particular number. This part of the application has the most room for innovative extensions: we should be able to set the state of the phone, change profiles, change themes, download graphics, etc. This can be achieved by loading URLs, but URLs and mobiles don’t mix, so why should we be presented with them when there could be a more usable layer in between? There could also be preferences for prompts: at the moment each action has to be confirmed with a yes or a no, but in some secure environments it would be nice to have a function launched without the extra button push.
p(caption). If a tag contains no data, then we are notified and placed back on the main screen (as happened when I tried to write to my Oyster card).
p(caption). If the tag is writeable we are asked which shortcut to write to the tag.
p(caption). When we touch a tag with a shortcut on it, a prompt appears asking for confirmation. This is a level of UI to prevent mistakes, and a certain level of security, but it also reduces the overall usability of the system. With URL launching, there are two stages of confirmation, which is infuriating. There needs to be some other mode of confirmation, and the ‘service discovery’ app needs to somehow be deeper in the system to avoid these double button presses.
p(caption). Lastly, there is a log of actions. Useful to see if the application has been reading something in your bag or wallet, without you knowing…
Embodied interaction in music
I too have “ditched”:http://interconnected.org/home/2005/04/12/my_40gb_ipod_has my large iPod for the “iPod Shuffle”:http://www.apple.com/ipodshuffle/, finding that “I love the white-knuckle ride of random listening”:http://www.cityofsound.com/blog/2005/01/the_rise_and_ri.html. But that doesn’t exclude the need for a better small-screen-based music experience.
The pseudo-analogue interface of the iPod clickwheel doesn’t cut it. It can be difficult to control when accessing huge alphabetically ordered lists, and the acceleration or inertia of the view can be really frustrating. The combination of interactions (clicking into deeper lists, scrolling, clicking deeper again) turns into a long and tortuous experience if you are engaged in any simultaneous activity. Plus it’s difficult to use through clothing, or with gloves.
h3. Music and language
My first thought was something “Jack”:http://www.jackschulze.co.uk and I discussed a long time ago: using a phone keypad to type the first few letters of an artist, album or genre and see the results in real-time, much like “iTunes”:http://www.apple.com/itunes/jukebox.html does on the desktop. I find myself using this a lot in iTunes rather than browsing lists.
“Predictive text input”:http://www.t9.com/ would be very effective here, when limited to the dictionary of your own music library. (I wonder if “QIX search”:http://www.christianlindholm.com/christianlindholm/2005/02/qix_from_zi_cor.html would do this for a music library on a mobile?)
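The idea of predictive input restricted to your own library can be sketched very simply. This is not how T9 or QIX actually work internally; it is just a toy prefix-matcher over keypad digit sequences, with an invented four-artist library, to show why the small dictionary makes matching so effective:

```python
# Toy sketch of keypad-predictive search over a music library.
# Standard phone keypad letter groups:
KEYPAD = {
    'a': '2', 'b': '2', 'c': '2', 'd': '3', 'e': '3', 'f': '3',
    'g': '4', 'h': '4', 'i': '4', 'j': '5', 'k': '5', 'l': '5',
    'm': '6', 'n': '6', 'o': '6', 'p': '7', 'q': '7', 'r': '7', 's': '7',
    't': '8', 'u': '8', 'v': '8', 'w': '9', 'x': '9', 'y': '9', 'z': '9',
}

def to_digits(name):
    """Map a name to its keypad digit sequence, ignoring non-letters."""
    return ''.join(KEYPAD[c] for c in name.lower() if c in KEYPAD)

def predict(digits, library):
    """Return library entries whose digit sequence starts with the input."""
    return [a for a in library if to_digits(a).startswith(digits)]

library = ['Boards of Canada', 'Broadcast', 'Autechre', 'Aphex Twin']
print(predict('27', library))   # '2' = abc, '7' = pqrs
```

With only a few thousand names in the dictionary, two or three keypresses are usually enough to narrow the list to a handful of results.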
Maybe now is the time to look at this as we see “mobile”:http://www.sonyericsson.com/spg.jsp?cc=gb&lc=en&ver=4000&template=pp1_loader&php=php1_10245&zone=pp&lm=pp1&pid=10245 “phone”:http://www.nokia.com/n91/ “music convergence”:http://www.engadget.com/entry/1234000540040867/.
h3. Navigating through movement
Since scrolling is inevitable to some degree, even within fine search results, what about using simple movement or tilt to control the search results? One of the problems with using movement for input is context: when is movement intended? And when is movement the result of walking or a bump in the road?
One solution could be a “squeeze and shake” quasi-mode: squeezing the device puts it into a receptive state.
Another could be more reliance on the 3 axes of tilt, which are less sensitive to larger movements of walking or transport.
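The “squeeze and shake” quasi-mode above can be sketched as a tiny piece of logic: tilt only scrolls while the device is squeezed, and small tilts inside a deadband are ignored, so walking and bumps do nothing. The sensor values, deadband and gain here are invented for illustration:

```python
# Sketch of a squeeze-enabled tilt-scrolling quasi-mode.
def scroll_step(squeezed, tilt_y, deadband=0.15, gain=5):
    """Return how many list items to scroll for one sensor reading.

    squeezed: whether the squeeze sensor puts us in the receptive state
    tilt_y:   tilt reading, roughly -1.0 .. 1.0
    """
    if not squeezed:
        return 0                   # movement outside the quasi-mode is ignored
    if abs(tilt_y) < deadband:
        return 0                   # small jitters (walking, bumps) do nothing
    return int(tilt_y * gain)      # tilt angle maps to scroll speed/direction
```

A squeeze sensor makes the user’s intent explicit, which sidesteps the hard problem of guessing whether a movement was meant as input.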
I’m not sure about gestural interfaces: most of the prototypes I have seen are difficult to learn, and require a certain level of performativity that I’m not sure everyone wants in public space. But having accelerometers inside these devices should, and would, allow for hacking together other personal, adaptive gestural interfaces that could perhaps access higher-level functions of the device.
One gesture I think could be simple and effective would be covering the ear to switch tracks. To try this out we could add a light or capacitive touch sensor to each earbud.
With this I think we would have trouble with interference from other objects, like resting the head against a wall. But there’s something nicely personal and intimate about putting the hand next to the ear, as if to listen more intently.
h3. More knobs
Things that are truly analogue, like volume and time, should be mapped to analogue controls. I think one of the greatest unexplored areas in digital music is real-time audio-scrubbing, currently not well supported on any device, probably because of technical constraints. But scrubbing through an entire album, with a directly mapped input, would be a great way of finding the track you wanted.
Research projects like the “DJammer”:http://www.hpl.hp.com/research/mmsl/projects/djammer/ are starting to look at this, specifically for DJs. But since music is inherently time-based there is more work to be done here for everyday players and devices. Let’s skip the interaction design habits we’ve learnt from the CD era and go back to vinyl 🙂
h3. Evolution of the display
Where displays are required, I hope we can be free of small, fuzzy, low-contrast LCDs. With new displays being printable on paper, textiles and other surfaces there’s the possibility of improving the usability, readability and “glanceability” of the display.
We are beginning to see signs of this with this OLED display on this “Sony Network Walkman”:http://dapreview.net/comment.php?comment.news.1086 where the display is under the surface of the product material, without a separate “glass” area.
For the white surface of an iPod, the high-contrast, “paper-like surfaces”:http://www.polymervision.com/New-Center/Downloads/Index.html of technologies like e-ink would make great, highly readable displays.
So I really need to get prototyping with accelerometers and display technologies, to understand simple movement and gesture in navigating music libraries. There are other questions to answer: I’m wondering if using movement to scroll through search results would create the appearance of a large screen space, through the lens of a small screen. As with “bumptunes”:http://interconnected.org/home/2005/03/04/apples_powerbook, I think many more opportunities will emerge as we make these things.
h3. More reading
“Designing for Shuffling”:http://www.cityofsound.com/blog/2005/04/designing_for_s.html
“Thoughts on the iPod Shuffle”:http://interconnected.org/home/2005/04/22/there_are_two
“On the body”:http://people.interaction-ivrea.it/b.negrillo/onthebody/
Tangible and social interaction
h3. Brief history of interaction
(Based on Dourish, see reading recommendations, below)
Each successive development in computer history has made greater use of human skills:
* electrical: required a thorough understanding of electrical design
* symbolic: required a thorough understanding of the manipulation of abstract languages
* textual: text dialogue with the computer: set the standards of interaction we still live with today
* graphic: graphical dialogue with the computer, using our spatial skills, pattern recognition, and motion memory with a mouse and keyboard
We have become stuck in this last model.
Interaction with computers has remained largely the same: desk, screen, input devices, etc. Even entirely new fields like mobile and iTV have followed these interaction patterns.
* Tangible: physical: having substance or material existence; perceptible to the senses
* Social: human and collaborative abilities, or ‘software that’s better because there’s people there’ (Definition from “Matt Jones”:http://blackbeltjones.typepad.com/work/ and “Matt Webb”:http://interconnected.org/home/)
Dourish notes in the first few chapters of his book that as interaction with computers moves out into the world, it becomes part of our social world too. The social and the tangible are intricately linked as part of “being in the world”.
What follows are examples of products or services we can use or buy right now. I’m specifically interested in the ways that these theories of ubiquitous computing and tangible interaction are moving out into the world, and the way that we can see the trends in currently available products.
I’m aware that there are also terrifically interesting things happening in research (eg the “Tangible Media Group”:http://tangible.media.mit.edu/) but right now I’m interested in the emergent effects that start to happen when millions of people use things (like Flickr, weblogs, Nintendo DS, and mobile social software).
h3. Social trends on the web
On the web the current trend is building simple platforms that support complex social/human behaviour.
* “Weblogs”:http://www.rebeccablood.net/essays/weblog_history.html, newsreaders and RSS: simple platform that has changed the way the web works, and supported simple social interaction (the basic building blocks of dialogue, or conversation)
* “Flickr”:http://www.flickr.com/: a simple platform for media/photo sharing that turned into a thriving community: works well with the web by allowing syndicated photos, and bases the social network on top of a defined function
* Others include del.icio.us, world of warcraft, etc.
h3. Social mobile computing
On mobile platforms most of the exciting stuff is happening around presence, context and location.
* “Familiar strangers”:http://berkeley.intel-research.net/paulos/research/familiarstranger/: stores a list of all the phones that you have been near in places that you inhabit, and then visualises the space around you according to who you have met before. “More mobile social software”:http://www.elasticspace.com/2004/06/mobile-social-software
* “Mogi”:http://www.thefeature.com/article?articleid=100501: location based game, but most interestingly supports different contexts of use: both at home in front of a big screen, and out on a small mobile screen.
h3. Social games
Interesting that games are moving away from pure immersive 3D worlds, and starting to devote equal attention to their situated, social context
* Nintendo DS: “PictoChat”:http://www.eurogamer.net/article.php?article_id=57287, local wireless networks that can be adapted for gameplay or communication (picture chatting included as standard)
* “Sissyfight”:http://www.sissyfight.com/: very simple social game structure, encourages human behaviour, insults
* “Habbohotel”:http://www.habbo.no/: simple interaction structures, (and fantastic attention to detail in “iconic representations”:http://www.scottmccloud.com/store/books/uc.html) support human desires. Now a very large company, in over 12 countries, based on the sales of virtual furniture
* “Singstar”:http://www.eurogamer.net/article.php?article_id=55470: entirely social game, about breaking social barriers and mutual humiliation: realtime analysis/visualisation of your voice actually makes you sing worse!
h3. Tangible games
* “Eyetoy”:http://www.eurogamer.net/article.php?article_id=4525: Brings the viewer into the screen, creates a “performative and social space”:http://www.prandial.com/archives/2005_01.html#009045, and allows communication via PS2
* “Dance Dance Revolution”:http://www.eurogamer.net/article.php?article_id=52731: taking the television into physical space
* “Nokia wave-messaging”:http://blackbeltjones.typepad.com/work/2004/06/motional_rescue.html: puts information back into space, and creates social and performative opportunities (Photo thanks to Matt Webb)
* “Yellow Arrow”:http://www.yellowarrow.org: puts digital information into city space, gives us a glimpse of the way that we might have more interaction with situated information in the future
There are also very interesting aspects of “gender”:http://foe.typepad.com/blog/2005/01/embodied_intera.html in all of this: this move towards the social implies a move towards the type of games/play that is seen more often in girls.
h3. Recommended reading
“Where the Action Is, Paul Dourish”:http://www.amazon.co.uk/exec/obidos/ASIN/0262541785/ (Read the first 3 chapters for a great introduction)
“Digital Ground, Malcolm McCullough”:http://www.amazon.co.uk/exec/obidos/ASIN/0262134357/ (Exploring the relationship between architectural and digital spaces)
“Physical Computing, O’Sullivan, Igoe”:http://www.amazon.co.uk/exec/obidos/ASIN/159200346X/ (Practical book on making physical computing devices)
“Smart Mobs, Howard Rheingold”:http://www.amazon.co.uk/exec/obidos/ASIN/0738208612/ (Exploring wider social aspects of mobile technology)
“The Humane Interface, Jef Raskin”:http://www.amazon.co.uk/exec/obidos/ASIN/0201379376/ (Covers screen based interaction, but has the best discussion on ‘modes’ of any book)
“Mind Hacks, Matt Webb and Tom Stafford”:http://www.mindhacks.com/ (Looks at our interaction with the world from the perspective of neuroscience, great introduction to ‘affordances’)
These are some of my notes from Mikael Fernström’s lecture at AHO.
The aim of the “Soundobject”:http://www.soundobject.org/ research is to liberate interaction design from visual dominance, to free up our eyes, and to do what small displays don’t do well.
Reasons for focusing on sound:
* Sound is currently under-utilised in interaction design
* Vision is overloaded and our auditory senses are seldom engaged
* In the world we are used to hearing a lot
* Adding sound to existing, optimised visual interfaces does not add much to usability
Sound is very good at attracting our attention, so we have alarms and notification systems that successfully use sound in communication and interaction. We talked about using ‘caller groups’ on mobile phones where people in an address book can be assigned different ringtones, and how effective it was in changing our relationship with our phones. In fact it’s possible to sleep through unimportant calls: our brains are processing and evaluating sound while we sleep.
One fascinating thing that I hadn’t considered is that sound is our fastest sense: it has an extremely high temporal resolution (ten times faster than vision), so for instance our ears can hear pulses at a much higher rate than our eyes can watch a flashing light.
h3. Disadvantages of sound objects
Sound is not good for continuous representation because we cannot shut out sound in the way we can divert our visual attention. It’s also not good for absolute display: pitch, loudness and timbre are relative to most people, even people that have absolute pitch can be affected by contextual sounds. And context is a big issue: loud or quiet environments affect the way that sound must be used in interfaces: libraries and airplanes for example.
There are also big problems with spatial representation in sound: techniques that mimic the position of sound based on binaural differences are inaccessible to about a fifth of the population. This perception of space in sound is also intricately linked with the position and movement of the head. “Some Google searches on spatial representation of sound”:http://www.google.com/search?&q=spatial+representation+of+sound. See also “Psychophysical Scaling of Sonification Mappings [pdf]”:http://sonify.psych.gatech.edu/publications/pdfs/2000ICAD-Scaling-WalkerKramerLane.pdf
‘Filling a bottle with water’ is a sound that could work as part of an interface, representing actions such as downloading, uploading or in replacement of progress bars. The sound can be abstracted into a ‘cartoonification’ that works more effectively: the abstraction separates simulated sounds from everyday sounds.
Mikael cites inspiration from “foley artists”:http://en.wikipedia.org/wiki/Foley_artist working on film sound design, that are experienced in emphasising and simplifying sound actions, and in creating dynamic sound environments, especially in animation.
A side effect of this ‘cartoonification’ is that sounds can be generated in simpler ways: reducing processing and memory overhead in mobile devices. In fact all of the soundobject experiments rely on parametric sound synthesis using “PureData”:http://www.puredata.org/: generated on the fly rather than using sampled sound files, resulting in small, fast, adaptive interface environments (sound files and the PD files used to generate the sounds can be found at the “Soundobject”:http://www.soundobject.org/ site).
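The idea of generating interface sounds parametrically rather than playing samples can be illustrated with a crude sketch. This is emphatically not the Soundobject/PureData model, just a toy: a ‘filling bottle’ approximated as a tone whose pitch rises with fill progress, computed on the fly from two parameters (the base frequency and range are invented):

```python
# Toy sketch of parametric synthesis: a 'filling bottle' tone whose
# pitch rises as the fill progress (0.0 .. 1.0) increases.
import math

def bottle_fill(progress, n=64, rate=8000):
    """Return a short burst of samples at a pitch that rises with progress.

    A real model would simulate the resonance of the shrinking air
    column; here the mapping is just linear for illustration.
    """
    freq = 200 + 800 * progress      # higher pitch as the bottle 'fills'
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
```

Because the sound is a function of a parameter rather than a stored file, a download-progress sound costs a few bytes of code instead of kilobytes of samples, which is exactly the memory/processing advantage described above.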
One exciting and pragmatic idea that Mikael mentioned was simulating ‘peas in a tin’ to hear how much battery is left in a mobile device. Something that seems quite possible, reduced to mere software, with the accelerometer in the “Nokia 3220”:http://www.nokia.com/phones/3220. Imagine one ‘pea’ rattling about, instead of one ‘bar’ on a visual display…
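The ‘peas in a tin’ idea reduces to a tiny mapping: battery level decides how many pea-impact sounds play when the accelerometer detects a shake. A hypothetical sketch (the five-pea maximum and the shake callback are invented):

```python
# Hypothetical sketch of 'peas in a tin' battery feedback:
# fewer peas rattle as the battery drains.
def peas_for_battery(percent, max_peas=5):
    """Map battery percentage to a pea count; at least one while any charge remains."""
    if percent <= 0:
        return 0
    return max(1, round(percent / 100 * max_peas))

def on_shake(percent, play_impact):
    """On a shake event, trigger one impact sound per remaining 'pea'."""
    for _ in range(peas_for_battery(percent)):
        play_impact()
```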
h3. Research conclusions
The most advanced prototype of a working sound interface was a box that responded to touch, and had invisible soft-buttons on its surface that could only be heard through sound. The synthesised sounds responded to the movement of the fingertips across a large touchpad-like device (I think it was a “tactex”:http://www.tactex.com/ device). These soft-buttons used a simplified sound model that synthesised _impact_, _friction_ and _deformation_. See “Human-Computer Interaction Design based on Interactive Sonification [pdf]”:http://richie.idc.ul.ie/eoin/research/Actions_And_Agents_04.pdf
The testing involved asking users to feel and hear their way around a number of different patterns of soft-buttons, and to draw the objects they found. See “these slides”:http://www.flickr.com/photos/timo/tags/soundobjects/ for some of the results.
The conclusions were that users were almost as good at using sound interfaces as with normal soft-button interfaces and that auditory displays are certainly a viable option for ubiquitous, especially wearable, computing.
h3. More reading
“Gesture Controlled Audio Systems”:http://www.cost287.org/
Spatial memory at Design Engaged 2004
Notes on two related projects:
h2. 1. Time that land forgot
* A “project”:http://www.elasticspace.com/timeland/ in collaboration with Even Westvang
* Made in 10 days at the Icelandic locative media workshop, summer 2004
* Had the intention of making photo archives and gps trails more useful/expressive
* Looked at patterns in my photography: 5 months, 8000 photos, visualised them by date / time of day. Fantastic resource for me: late night parties, early morning flights, holidays and the effect of midnight sun is visible.
* Looking now to make it useful as part of a more pragmatic interface, and to try other approaches less about abstracted visualisation
* “info, details, research and source code”:http://www.elasticspace.com/2004/07/timeland
* “time visualisation”:http://www.elasticspace.com/images/photomap_times_large.gif
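The date / time-of-day visualisation above boils down to bucketing photo timestamps by hour, so that patterns like late-night parties and early-morning flights become visible. A minimal sketch (the timestamps are invented examples, not data from the project):

```python
# Minimal sketch: bucket photo timestamps by hour of day, the raw
# material behind a time-of-day visualisation of a photo archive.
from collections import Counter
from datetime import datetime

def hour_histogram(timestamps):
    """Count photos per hour of day from ISO-format timestamp strings."""
    return Counter(datetime.fromisoformat(t).hour for t in timestamps)

photos = ['2004-06-21T02:14:00', '2004-06-21T02:45:00', '2004-06-22T07:05:00']
print(hour_histogram(photos))   # two late-night photos, one early-morning
```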
h2. 2. Marking in urban public space
I’ve also been mapping stickering, stencilling and flyposting: walking around with the camera+gps and “photographing examples of marking”:http://www.flickr.com/photos/timo/sets/8380/ (not painted graffiti).
This research looks at the marking of public space by investigating the physical annotation of the city: stickering, stencilling, tagging and flyposting. It attempts to find patterns in this marking practice, looking at visibility, techniques, process, location, content and audience. It proposes ways in which this marking could be a layer between the physical city and digital spatial annotation.
h3. Some attributes of sticker design
* *Visibility*: contrast, monochromatic, patterns, bold shapes, repetition
* *Patina*: history, time, decay, degradation, relevance, filtering, social effects
* *Physicality*: residue of physical objects: interesting because these could easily contain digital info
* *Adaptation and layout*: layout is usually respectful, innovative use of DTP and photocopiers, adaptive use of sticker patina to make new messages on top of old
Layers of information build on top of each other, as with graffiti, stickers show their age through fading and patina, flyposters become unstuck, torn and covered in fresh material. Viewed from a distance the patina is evident, new work tends to respect old, and even commercial flyposting respects existing graffiti work.
Techniques vary from strapping zip-ties through cardboard and around lampposts for large posters, to simple hand-written notes stapled to trees, and short-run printed stickers. One of the most fascinating and interactive techniques is the poster offering strips of tear-off information. These are widely used, even in remote areas.
Initial findings show that stickers don’t relate to local space: they are less about specific locations than about finding popular locations, “cool neighbourhoods” or just ensuring repeat exposure. This is the opposite of my expectations, and perhaps sheds some light on the current success/failure of spatial annotation projects.
I am particularly interested in the urban environment as an interface to information and an interaction layer for functionality, using our spatial and navigational senses to access local and situated information.
There is a concern that in a densely annotated city we might have an overload of spatial information: what about filtering and foregrounding relevant, important information? Given that current technologies have very short ranges (10–30mm), we might be able to use our existing spatial skills to navigate overlapping information. We could shift some of the burden of information retrieval from information architecture to physical space.
I finished by showing this animation by Kriss Salmanis, a young Latvian artist. Amazing re-mediation of urban space through stencilling, animation and photography. (“Un ar reizi naks tas bridis” roughly translates as “And in time the moment will come”.)
p(footnote). Graffiti Archaeology, Cassidy Curtis
p(footnote). Street Memes, collaborative project
p(footnote). Spatial annotation projects list
p(footnote). Nokia RFID kit for 5140
p(footnote). Spotcodes, High Energy Magic
p(footnote). “Mystery Meat navigation”, Vincent Flanders
p(footnote). RDF as barcodes, Chris Heathcote
p(footnote). Implementation: spatial literature
p(footnote). Yellow Arrow
Design Engaged 2004
We are all sat around a table in Amsterdam, at Design Engaged 2004. There are lots of photos going up to Flickr, and here are my notes.
h2. Ben Cerveny
* The growth of the soil
* How do we comprehend complexity
* How do we build structures around complex information
* Accreting meta-data: GPS data, descriptive information
* Break down of material as it hits the soil
* Soup, tags, condensed and distilled meta objects
h3. Self organisation
* sorting mechanisms, affinity browsers, related, filtering, emergent relationships, interrelationships
* How do we conceive a metaphor for building these processes? A structure that is meaningful for the users.
* Application design: movement through states of application: to tending to a flow of processes
* Tending to meta-data is a growth process
* DLA diffusion limited aggregation, natural process model
* The relationships between metadata can be visualised as this
* Should model metadata using plant models: plant models have existed for eons, basic structures for material
h3. Rules for expression
* L-systems growth, mimics biological rulesets
* Map rule-sets in metadata onto L-systems, affinity rules
* Branching tree structures could be used to make metadata more useful
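The rewriting mechanism behind L-systems is simple enough to sketch in a few lines. This is only an illustration of the kind of rule-set Ben is talking about mapping metadata onto; the axiom and rules here are the classic textbook “algae” example, not anything from the talk.

```python
# Minimal L-system: each generation, rewrite every symbol by its rule
# (symbols without a rule are copied unchanged).
def expand(axiom, rules, generations):
    for _ in range(generations):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

# Lindenmayer's original "algae" rules (illustrative only)
rules = {"A": "AB", "B": "A"}
print(expand("A", rules, 4))  # -> ABAABABA
```

The string lengths grow as Fibonacci numbers, which is the sort of structural signature a “botany” of different rule-sets could classify.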
h3. Roots and Feeds
* RSS feeds, a root system, aggregator has roots, to the surface of a newsreader
h3. Structural information
* After applying rules of expression (algorithms, l-systems) we could see differences in the way that the plant has evolved
* A “botany” of these different structures: smaller, larger clusters, structures.
h3. Cultivation as culture
* From a user perspective the idea of cultivation: users can actually effect change: can breed your own searches, use searches generationally, use your own adapted metaphors for new contexts
* Mix and match mechanisms or instruments (specific rule-sets) move expressions and apply them to different rule-sets
* Don’t have to understand genetics, but we have found use for plants for generations
* User doesn’t need to know mechanisms, just ability to make changes and view outcomes
h3. Tending the garden
* Incredible complexity, incredible diversity
* Not intimidated by the complexity of the garden
* Present similar tools to tend to data
* Casey Reas: organic information design
* Thinkmap, physical simulation systems
* Mitchell Resnick: Turtles Termites, Traffic Jams
* Matt J: Does it rely on visual metaphors: how do we get people to cultivate rather than consume?
h2. Thomas Van Der Wal
* Synching feeling
h3. Everything fit in our brain
* then libraries
* then digital bits
* then putting everything in one place
* Our information on our pdas, cellphones, somewhere
* The dream is that we have accurate information at our disposal when we need it
* Personal info-cloud
* Local info-cloud: should it be located?
* External info-cloud: things you don’t know about
* How do users use information?
* Device versus network?
* Our networked space, that exists out in space
* Usable: syncing between two devices: calendar, address book, to do list
* Dodgy: documents, media maps, web-based info, multiple devices
* Personal version control: different devices have different versions
* Personal categorisation:
h3. Standard metadata for personal info-cloud
* content description
* use type (eg)
* instruction: destroy, revise in 6 months
* object type:
* categories: not a structured system, but hackable flat data
h3. Actual solutions
* Spotlight (Apple Tiger)
* MIT Project Oxygen
h3. Possible/partial solutions
* Script aggregation by metadata tag
* Publish to private/public location in RSS
* rsync and CVS
* Groove (Windows)
* Quicksilver (Mac)
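The first of these, script aggregation by metadata tag, can be sketched very simply: collect items from any source into per-tag feeds that could then be published as RSS. The item structure and tag names below are hypothetical, just to show the shape of the idea.

```python
from collections import defaultdict

def aggregate_by_tag(items):
    """Group item titles into per-tag feeds (hypothetical item shape)."""
    feeds = defaultdict(list)
    for item in items:
        for tag in item.get("tags", []):
            feeds[tag].append(item["title"])
    return dict(feeds)

items = [
    {"title": "Calendar sync notes", "tags": ["sync", "calendar"]},
    {"title": "Address book dump", "tags": ["sync"]},
]
print(aggregate_by_tag(items)["sync"])  # both items land in the "sync" feed
```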
h2. Adam Greenfield
* All watched over by machines of loving grace
* Some ethical guidelines for user experience in ubiquitous computing environments
* Ubicomp is coming: IPv6 provides 6.5×10^23 addresses for every square metre on the planet
* Moving from describing to prescribing
* Technological artefacts are too dismissive of people
* Someone to watch over me: attractive as well as scary
h3. Default to harmlessness
* must ensure users’ physical, psychic and financial safety
* must go well beyond graceful degradation
* faults must result in safety
h3. Be self disclosing
* Contain provisions for immediate, transparent querying of ownership, use, capabilities, etc.
* Seamlessness is optional
* Analogue of broadcast station identification or military IFF
* Web derived model for user-consent: cannot carry over to ubicomp, would be too intrusive to have to approve each and every disclosure of information in four space
h3. Be conservative of face
* ubiquitous systems are always already social systems: they must not unnecessarily embarrass, humiliate or shame
* Goes beyond formal information-privacy concerns
* Prospect of being nakedly accountable to an unseen, omnipresent network
h3. Be conservative of time
* Must not introduce undue complications into ordinary operations
* Adult, competent users understand adequately what they want, shouldn’t introduce barriers
* Potential conflict with principle 1
h3. Be deniable
* Should be able to opt-out, anytime, anywhere, any process
* Critically: the ability to say no, sacrificing nothing but the use of the system itself
* The “safe word” concept may find an application here
* Fabio: what about gossip
* Chris: surely there’s human responsibility
* Tom C: Social control includes humiliation and embarrassment
* Molly: systems for shaming: can be institutionalised and applied in problem places: difference between smart and smartass. Haven’t got good enough at modelling situations in order to get this right.
h2. Stefan Smagula
* Teaching and writing about interaction design
h2. Mike Kuniavsky
* Writing about ubicomp, society and social
* Material products are formed from social values
* Products affect how we think
* The pattern is “a recognition of the complexity, unpredictability, confusion of the world”
* The framework of thought of the last 600 years is coming to an end
* “by dividing the world into smaller pieces, ways can be found to explain it”: this method is waning
* Communication and transportation has been the key driver of this change
* Shown people (designers?) how complex life is
* Most people don’t know what to do about this complexity
* At the end of the prescriptive rationalist vision of the world
* It is our job as designers to recognise these ideas: “design is a projection of people’s ideals onto product”
* Past the confusion of postmodernism: the complexity hasn’t been branded yet, hasn’t been given a core set of ideas
* Book: Human built world
* The complexity of the world is an uncomfortably bright light, people turn away: designers can make it manageable
* Go to the light of complexity!
* Adam: are we up against biological limits: are we wired to deal with things in a linear way? Yes: physiological limits: 7±2.
* Ben: we conceive as a subtractive process: a mental scene out of an excess of input: we have a body of linear tools to process. There is a realisation that we are non-linear systems: technology is becoming us, and the other way around.
* Matt: we can learn complexity way more than we realise: tests show that we subconsciously learn complexity beyond language and rational thought
* Magical thinking is not wrong: all our models are wrong
* Tom C: Looking at people as shearing layers of perception and cognition
h2. Remon Tijssen
* Behaviours, tactility and graphics
* Tensionfield between playfulness and functionality
h2. David Erwin
* The funnel
* Serial, parallel and optional interfaces
h2. Peter Boersma
* Transactional interfaces
* ezGov uses IBM’s RUP
* RUP is weak in user-experience
* Added StUX, definitions of deliverables for user experience
h2. Dan Hill
* Self centred design
* Not selfish design
* Background: adaptive design, design as social process, inspiration from vernacular architecture, hackability, allowing and encouraging people to make technology what they want it to be
* Inspiration from trip to US
* Assumption that UCD is generally a good thing
* The focus on usability has distracted people: it has become an end in itself
* UCD manifests itself in usability, at the expense of usefulness
* Cultural and social products: massive variation of use across the globe
* Products most innovative at BBC/music: audioscrobbler/lastFM: intense meaning in the patterns it generates. More innovative than iTunes music store. Steam: setting reminders for radio stations: hacked third party product, BBC is trying to support this innovation.
* This innovation is coming from non-designers
* Veen: Amateurised design: the most interesting design on the web: Shirky: Situated software
* Always consider a thing in its next larger context: Eliel Saarinen: useful piece of design process. Chair, room, house, city.
* A lot of information about the self, coming out of these systems
* Audioscrobbler: looking at ones music, bookmarks, photos, lunches, weblog posts, gps co-ordinates: how does this affect habits?
* Pace of development: what can be done on the web.
* Self-knowledge and enlightenment: how does it affect one’s life
* The practice and focus of design is moving towards behaviour
* This is early adopter activity, this is geeky, high barrier to entry, it requires code to make these things. It’s self limiting: only certain kind of people can make these products.
* Scalability problems: resilience: lack of reliability of iterative development; when will we be at the stage when we can rely on things working?
* BBC, radio broadcasting needs to be resilient: public service
* Database design and scalability: Flickr doesn’t need to be normalised
* Common appeal of these things is self-limiting: too much systems level thinking.
* Moving into a space where products are social, and can have social meaning, and thus be socially harmful
* People’s assumption and experiences are based on context
* Need to be more rigorous about understanding social patterns
* audioscrobbler is not good at classical music
* Designers and researchers need better understanding of each other
* Designers are at their most useful when they are enabling adaptive design
* Using ethnography within a design process, look at long-term ethnographic process: hooking it into the rapid prototyping of the adaptive design world
* There is the value of sociology here. Ethno-methodology, Heidegger
* Book: Where the action is, Dourish.
* Social systems work well when there is accountability
* Building things where this also builds an account of the building
* Place and space: place being about social structures
* Embodiment: Appropriating products, building social meanings into products
* Accountability: part of the action is a documentation of the action (Dourish). Is ‘view source’ accountability?
* Book: The Presentation of Self in Everyday Life, Erving Goffman
h2. Matt Webb
* Neuroscience and interaction design
* This is really mostly psychology
* Game: remembering animals
* Light comes from top left
* Easier to react in the direction that things approach you from
* Dialogue boxes, work with natural directions
* We follow human eye direction, not robot eye direction, pulling a lever is faster when eyes point in that direction
* We respond the same to arrows as we do to gaze
* All that neuroscience has done is to confirm what we know from psychology
* 3 types of object, animate, inanimate and tool
* 3 zones: graspable, peripersonal
* The schema of the body is extended by the held tools
* Our body space is quite mutable: space on a screen becomes the space represented by the body, anything which moves as part of your hand becomes part of your grasp, there’s an amount of time that this takes to understand this, learning process and experience
* Grasping has as much primacy as a cup itself: so “sit down” or “chair” are equivalent in the brain
* If we see or say grasping, or looking at coffee cup shows
* “What to do with too much information is the great riddle of our time”
* Mapping observed phenomena to the science of jetstreams: the same thing will happen to neuroscience
h2. John Poisson
* The stretch time conundrum
* Sony is a huge force: vaunted to vilified in three short decades
* Loss of brand value: products are not meeting user expectations
* Sony founders have changed, directions have changed
* One of the problems is the fact that it’s Japanese: basic, simple cultural processes
* Hikaru dorodango: process refinement as creative expression: successively sculpting and crafting mud balls into spheres
* 3 interconnected languages are undocumentably mixed
* Languages are connected to neurological development: learning Japanese at an early age increases the threshold of tolerance for the pain of complexity: kanji pain begets user pain.
* At first thought that it was a problem of language, but then realised this increased tolerance of complexity pain.
* Sony’s “iPod killer” is a user-experience nightmare, but for Japanese users it’s not too complex
* There’s an overall acceptance of complexity in Japan
* Pattern based learning: origami: 48 steps of process, more complex than interfaces
* Stretch time: at 3 o’clock on the Sony campus everyone stops, music plays and everyone is encouraged to stretch.
* Process is good: start with rice cookers and end up with transistors: releasing lots of stuff and then seeing what works. But there are a lot more misses than hits at the moment
h2. Sanjay Khanna
* Kurt Vonnegut in “Cold Turkey”
* Mike: intended effects are insignificant compared with the emergent effects, just noise compared to the overall outcomes
h2. Niels Wolf
* Intro to JXTA
* Works on every network device
* Allows control over your data, sharing, peer to peer backup
* Implemented in many languages: including python
* Assigned a unique number, which works across IP, bluetooth, mobile rendezvous, etc.
* Everybody becomes a server if no other can be found
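The transport-independent peer ID is the interesting idea here: a node keeps one stable identity no matter which network it is reached over. The sketch below is not the JXTA API, just an illustration of that idea under assumed names (`Peer`, `bind`).

```python
import uuid

class Peer:
    """A node with one stable ID, reachable over several transports."""

    def __init__(self):
        self.peer_id = uuid.uuid4()   # generated once, then kept for life
        self.transports = {}          # transport name -> current address

    def bind(self, transport, address):
        # Addresses come and go; the peer ID never changes.
        self.transports[transport] = address

p = Peer()
p.bind("ip", "192.0.2.7:4001")
p.bind("bluetooth", "00:11:22:33:44:55")
# p.peer_id is identical across both bindings
```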
h2. Molly Wright Steenson
* All hail the vast comforting suburb of the soul
* Lots of research into garden cities
* Worried that the future is going to be boring
* Closing off some avenues for development by focusing on urban environments
* What are the constraints that define a suburb?
h2. Jack Schulze
* Mapping and looking
* Lots of cool stuff: no notes.
h2. Matthew Ward
* Questioning the commodification of space
* We are social, spatial, temporal beings
h3. What were the conditions for the rise of these spatial technologies
* 2000: descrambling of GPS (Selective Availability switched off)
* FCC policy to make sure 911 callers can be located
* Ubiquity of mobile phones
* If we don’t move away from the “where’s my nearest pizza” we are going to get really bored really soon
* Differential space: socio-spatial differences are emphasised and celebrated
* Iain Borden: Skateboarding
* “social space is a social product.” “Our task now is to construct everyday life, to produce it, consciously to create it, boredom is pregnant with desires, frustrated desires” Lefebvre.
h2. Chris Heathcote
* Nuts and bolts, how to use location
* Location is co-ordinates
* Location is names and titles
* Location is also near Matt Webb, or near my iBook: relative position might be more useful way of thinking
* Physical augmentation: using, abusing, changing where they live
* Visual design: Buddy finder on mobile phones: spatially false, chart junk
* Context awareness is really hard:
* What happens when you get rid of the maps?
* Lots more cool stuff that I didn’t take notes on…
h2. Matt Jones
* Nokia: Insight and foresight
* A hard problem: “Ubicomp is hard, understanding people, context and the world is hard, getting computers to handle everyday situations is hard, and expectations are set way too high.” Gene Becker, Fredshouse.net
* Next-gen mobile: big screens, more whizzy features, but we still have the same old messy world
* A modest start: being in the world instead of in front of the screen
* 3220: 5140: power up covers with new capabilities
* 3220: LED displays with accelerometers and thus motion capture
* Where the action is: This ignores 99% of our daily lives
* dance dance revolution and eyetoy: new world
* 5140: first RFID reader phone
* New ways of using mobiles with touch based tech
* easy and concrete access to services and repeat functions
* transfer of digital items between devices as simple as a gesture of giving
* in the future also fast and convenient local payment and ticketing: fast, easy way of getting settings and services
* When you count all the steps, simple actions take about 100 steps: finding settings, setting up the human-modem thing
* Touch actions are potentially two orders of magnitude less complex: down to one action
* Launched active cover with NFC (near field communication): Philips, Sony, Visa, Samsung: nfcforum.org
* Pairing things up, putting things together (how is this different from BT? passive chips)
* Prototype things!
* NFC is a touch based RFID technology
* Putting the information into the tag: can contain more than an ID
* Close mapping to physical objects: Dourish
* NFC active objects will have mixed spirit world of objects having magic behind them: permitted moves for games, origins of objects, spime like stuff,
* One to one mapping: multiple digital meanings on objects
* it’s not a one-way world: these things are re-writeable: secular isn’t the dominant way of thinking
* Now that we can give objects spirit world, semiotic, actions
* Into fetish objects: auspicious computing, unique wooden balls (minority report)
* Friendster: a game of how many connections. Turning into an info-fetish physical game
* – phones are precious, tags are not
* – throwaway, data detritus, spime spume
* + programmatic product life-cycle
* + audit trails for trash
* + automation of recycling
* WWF: sustainability at the speed of light
h3. Long now, (Stewart Brand)
* Sometimes technology can disrupt these layers
h2. Fabio Sergio
* From collision to convergence
* How I learned to stop worrying and watch tv on my mobile phone
* 2001: who the hell would want to watch tv on a mobile?
* 2003: using mobile to watch big brother from the car
* consultants: timeliness, context sensitivity, self-expression, immediacy, relevance
* People rely on their connected devices to fill-in interstitial time slots
* Armed with this notion outlets acquired content and chopped it into 3–5 minute videos
* The end result is too much navigation and not enough content, undermines the concept of “snacking”. The navigation has become the experience
* Navigation is not bad per-se, the web is arguably built on it
* Flow: where the consumer is completely engaged with interaction
* Mobile content experiences happen in contexts that basically negate the ability to focus
* How do you access video: at the moment through a browser
* Big Brother: lessons learnt
* Always-on-ness: there is always something new happening: Marshall McLuhan meets Orwell
* Something might happen at any time
* Action can be just a video call away
* Easy to get into the flow of what’s happening
* Cut to measure: as little or as long as you want
* Conversation-based: you can keep hearing when you can’t watch: don’t need to look at the screen
* Why should the browser and media player be two different applications? should probably be one.
* People need context medium content, probably in this order
* The handset should be a remote control: as much as possible make navigation resident on the device
* Content should be snackish: but should be grouped
* The experience should be around the on/off switch
h2. Timo Arnall
* “Presentation and notes”:http://www.elasticspace.com/2004/11/spatial-memory-design-engaged
h2. Sunday discussion
* Brief: design a ticket machine that also allows city navigation and takes care of tourists and busy commuters equally, that doesn’t have a screen
* Alternative brief: A permanent tag large enough to contain digital info, that could be unobtrusively attached to anything in public space
* Mechanisms for friendly denial
h3. I’m lost: design a physical pathway which
* includes the idea of signs to explain features of the environment to the unmediated
* which could serve as a compensation or apology for people denied in the ubiquitous sense
* which was distinctively local and amsterdamish
* includes infrastructure
* poetics and emotional enhancements required
Overheard somewhere at the bar: anthropology/ethnography is this year’s library science: another new/old juxtaposition. Not that I agree.
Art + communication 2004
“Even”:http://www.polarfront.org and I presented our “Timeland”:http://www.elasticspace.com/timeland/ project during the 3 day conference and exhibition.
I have made a large “photo set”:http://www.flickr.com/photos/timo/sets/18602/ at Flickr, and we have been using the tag “art+communication”:http://www.flickr.com/photos/tags/artcommunication/ for collaborative documentation.
The highlight of the event was a trip to Limbazi, for the opening of “Piens”:http://locative.x-i.net/piens/info.html the “milk” project, looking at the personal stories around the mapping of milk routes through the EU. It was really good to see GPS being used as a storytelling tool, a way of opening up personal stories in the documentary process.
A big thank you to the RIXC lot, and everyone involved.