h3. First impressions
Overall the interaction between phone and RFID tags has been good. The reader/writer is on the base of the phone, at the bottom. This seems a little awkward to use at first, but slowly becomes natural. When I have given it to others, their immediate reaction is to point the top of the phone at the tag, and nothing happens. There follow a few moments of explaining the intricacies of RFID and looking at the phone, with its Nokia ‘fingerprint’ icon. As phones increasingly become replacements for ‘contactless cards’, it seems likely that this interaction will become more habitual and natural.
Once the ‘service discovery’ application is running, the read time from tags is really quick. The sharp vibrations and flashing lights add to a solid feeling of interacting with something, both in the haptic and visual senses. This should turn out to be a great platform for embodied interaction with information and function.
The ability to read and write to tags makes it potentially adaptable as a platform wider than just advertising or ticketing. As an interaction designer I feel quite enabled by this technology: the three basic functions (making phone calls, going to URLs, or sending SMSs) are enough to start thinking about tangible interactions without having to go and program any Java midlets or server-side applications.
I’m really happy that Nokia is putting this technology into a ‘low-end’ phone rather than pushing it out in a ‘smartphone’ range. This is where there is potential for wider usage and mass-market applications, especially around gaming and content discovery.
I had some problems launching the ‘service discovery’ application. Sometimes it works, sometimes it doesn’t and it’s difficult to tell why this is. It would be great to be able to place the phone on the table, knowing that it will respond to a tag, but it was just a little too unreliable to do that without checking to see that it had responded. The version I have still says it’s a prototype, so this may well be sorted out by the released version.
Overall there is a lack of integration between the service discovery application and the rest of the system: Contacts, the SMS archive, service bookmarks etc. At the moment we need to enter the application to write and manage tags, or to give a ‘shortcut’ to another phone, but it seems that, as with Bluetooth and IR, this should be part of the contextual menus that appear under ‘Options’ within each area of the phone. There are also some infuriating prompts that appear when interacting with URLs; more details below.
p(caption). The phone opens the ‘service discovery’ application whenever it detects a compatible RFID tag near the base of the phone (when the keypad lock is off). This part is a bit obscure: sometimes it doesn’t ‘wake up’ for a tag, and the application needs to be loaded before it will read properly. Once the application is open (about 2-3 seconds) the read time of the tags seems instantaneous.
p(caption). The shortcuts menu gives access to stored shortcuts. Confusingly, this is different from ‘bookmarks’ and the ‘names’ list on the phone, although names can be searched from within the application. I think tighter integration with the OS is called for.
p(caption). Shortcuts can be added, edited, deleted, etc. in the same way as contacts. They can be ‘Given’ to another phone or ‘Written’ to a tag.
p(caption). There are three kinds of shortcuts: Call, URL or SMS. ‘Call’ will create a call to a pre-defined number, ‘URL’ will load a pre-defined URL, and ‘SMS’ will send a pre-defined SMS to a particular number. This part of the application has the most room for innovative extensions: we should be able to set the state of the phone, change profiles, change themes, download graphics, etc. This can be achieved by loading URLs, but URLs and mobiles don’t mix, so why should we be presented with them, when there could be a more usable layer in between? There could also be preferences for prompts: at the moment each action has to be confirmed with a yes or a no, but in some secure environments it would be nice to have a function launched without the extra button push.
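As a thought experiment, the three shortcut types could be represented as simple URI-style payloads before being written to a tag. This is a speculative sketch: the function name and the exact payload format are mine, not the actual encoding used by the Nokia application.

```python
# Hypothetical encoding of the three shortcut kinds as URI payloads.
# The format is illustrative, not the Nokia 3220 shell's real scheme.

def encode_shortcut(kind, value, body=None):
    """Return a URI payload for a 'call', 'url' or 'sms' shortcut."""
    if kind == "call":
        return "tel:" + value
    if kind == "url":
        # pass fully qualified URLs through; prefix bare hostnames
        return value if value.startswith("http") else "http://" + value
    if kind == "sms":
        # an SMS shortcut carries both a number and a pre-defined message
        return "sms:{}?body={}".format(value, body or "")
    raise ValueError("unknown shortcut kind: " + kind)

print(encode_shortcut("call", "+441234567890"))  # tel:+441234567890
print(encode_shortcut("url", "example.com"))     # http://example.com
```

A more usable layer in between, as suggested above, would hide these strings entirely and present only the action to the user.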
p(caption). If a tag contains no data, then we are notified and placed back on the main screen (as happened when I tried to write to my Oyster card).
p(caption). If the tag is writeable we are asked which shortcut to write to the tag.
p(caption). When we touch a tag with a shortcut on it, a prompt appears asking for confirmation. This is a level of UI to prevent mistakes, and a certain level of security, but it also reduces the overall usability of the system. With URL launching, there are two stages of confirmation, which is infuriating. There needs to be some other mode of confirmation, and the ‘service discovery’ app needs to somehow be deeper in the system to avoid these double button presses.
p(caption). Lastly, there is a log of actions. Useful to see if the application has been reading something in your bag or wallet, without you knowing…
This work explores the visual link between information and physical things, specifically around the emerging use of the mobile phone to interact with RFID or NFC. It was a presentation and poster at Design Engaged, Berlin on the 11th November 2005.
As mobile phones are increasingly able to read and write to RFID tags embedded in the physical world, I am wondering how we will appropriate this for personal and social uses.
I’m interested in the visual link between information and physical things. How do we represent an object that has digital function, information or history beyond its physical form? What are the visual clues for this interaction? We shouldn’t rely on a kind of mystery meat navigation (the scourge of the web-design world) where we have to touch everything to find out its meaning.
This work doesn’t attempt to be a definitive system for marking physical things; it is an exploratory process to find out how digital/physical interactions might work. It uncovers interesting directions while the technology is still largely out of the hands of everyday users.
h3. Reference to existing work
The inspiration for this is in the marking of public space and existing iconography for interactions with objects: push buttons on pedestrian crossings, contactless cards, signage and instructional diagrams.
This draws heavily on the substantial body of images of visual marking in public space. One of the key findings of this research was that visibility and placement of stickers in public space is an essential part of their use. Current research in ubicomp and ‘locative media’ is not addressing these visibility issues.
There is also a growing collection of existing iconography in contactless payment systems, with a number of interesting graphic treatments in a technology-led, vernacular form. In Japan there are also instances of touch-based interactions being represented by characters, colours and iconography that are abstracted from the action itself.
Sketching and development revealed five initial directions: circles, wireless, card-based, mobile-based and arrows (see the poster for more details). The icons range from being generic (abstracted circles or arrows to indicate function) to specific (mobile phones or cards touching tags).
Arrows might be suitable for specific functions or actions in combination with other illustrative material. Icons with mobile phones or cards might be helpful in situations where basic usability for a wide range of users is required. Although the ‘wireless’ icons are often found in current card readers, they do not successfully indicate the touch-based interactions inherent in the technology, and may be confused with WiFi or Bluetooth. The circular icons work at the highest level, and might be most suitable for generic labelling.
For further investigation I have selected a simple circle, surrounded by an ‘aura’ described by a dashed line. I think this successfully communicates the near field nature of the technology, while describing that the physical object contains something beyond its physical form.
In most current NFC implementations, such as the 3220 from Nokia and many iMode phones, the RFID reader is in the bottom of the phone. This means that the area of ‘activation’ is obscured in many cases by the phone and hand. The circular iconography allows for a space to be marked as ‘active’ by the size of the circle, and we might see it used to mark areas rather than points. Usability may improve when these icons are around the same size as the phone, rather than being a specific point to touch.
h3. Work in progress
These are early days for this technology, and this is work in progress. There is more to be done in looking at specific applications, finding suitable uses and extending the language to cover other functions and content.
Until now I have been concerned with generic iconography for a digitally augmented object. But this should develop into a richer language, as the applications for this type of interaction become more specific, and related to the types of objects and information being used. For example it would be interesting to find a graphic treatment that could be applied to a Pokemon sticker offering power-ups as well as a bus stop offering timetable downloads.
I’m also interested in the physical placement of these icons. How large or visible should they be? Are there places that should not be ‘active’? And how will this fit with the natural centres of gravity of the mobile phone in public and private space?
I’ll expand on these things in a few upcoming projects that explore touch-based interactions in personal spaces.
Feel free to use and modify the icons; I would be very interested to see how they can be applied and extended.
h3. Visual references
Oyster Card, Transport for London.
eNFC, Inside Contactless.
ExpressPay, American Express.
MiFare, various vendors.
Suica, JR, East Japan Railway Company.
RFID Field Force Solutions, Nokia.
NFC shell for 3220, Nokia.
ERG Transit Systems payment, Dubai.
Various generic contactless vendors.
Contactless payment symbol, Mastercard.
Open Here, Paul Mijksenaar, Piet Westendorp, Thames and Hudson, 1999.
Understanding Comics, Scott McCloud, Harper, 1994.
I too have “ditched”:http://interconnected.org/home/2005/04/12/my_40gb_ipod_has my large iPod for the “iPod Shuffle”:http://www.apple.com/ipodshuffle/, finding that “I love the white-knuckle ride of random listening”:http://www.cityofsound.com/blog/2005/01/the_rise_and_ri.html. But that doesn’t exclude the need for a better small-screen-based music experience.
The pseudo-analogue interface of the iPod clickwheel doesn’t cut it. It can be difficult to control when accessing huge alphabetically ordered lists, and the acceleration or inertia of the view can be really frustrating. The combination of interactions (clicking into deeper lists, scrolling, clicking deeper) turns into a long and tortuous experience if you are engaged in any simultaneous activity. Plus it’s difficult to use through clothing, or with gloves.
h3. Music and language
My first thought was something “Jack”:http://www.jackschulze.co.uk and I discussed a long time ago, using a phone keypad to type the first few letters of an artist, album or genre and seeing the results in real-time, much like “iTunes”:http://www.apple.com/itunes/jukebox.html does on a desktop. I find myself using this a lot in iTunes rather than browsing lists.
“Predictive text input”:http://www.t9.com/ would be very effective here, when limited to the dictionary of your own music library. (I wonder if “QIX search”:http://www.christianlindholm.com/christianlindholm/2005/02/qix_from_zi_cor.html would do this for a music library on a mobile?)
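As a rough sketch of how keypad prediction limited to your own library might work: map each name to its digit sequence on the standard keypad, then match prefixes as digits are typed. The keypad layout is the standard one; the library entries are invented for illustration.

```python
# Sketch: T9-style prediction restricted to the dictionary of a music
# library. The library list here is made up; a real player would use
# the device's own database.

KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
CHAR_TO_KEY = {c: k for k, chars in KEYPAD.items() for c in chars}

def to_keys(text):
    """Translate a name into its keypad digit sequence, skipping non-letters."""
    return "".join(CHAR_TO_KEY[c] for c in text.lower() if c in CHAR_TO_KEY)

def predict(digits, library):
    """Return entries whose digit sequence starts with what has been typed."""
    return [name for name in library if to_keys(name).startswith(digits)]

library = ["Radiohead", "Ramones", "Royksopp", "Sigur Ros"]
print(predict("72", library))  # ['Radiohead', 'Ramones']
```

Because the dictionary is only a few thousand names rather than a whole language, results narrow very quickly, usually within two or three key presses.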
Maybe now is the time to look at this as we see “mobile”:http://www.sonyericsson.com/spg.jsp?cc=gb&lc=en&ver=4000&template=pp1_loader&php=php1_10245&zone=pp&lm=pp1&pid=10245 “phone”:http://www.nokia.com/n91/ “music convergence”:http://www.engadget.com/entry/1234000540040867/.
h3. Navigating through movement
Since scrolling is inevitable to some degree, even within fine search results, what about using simple movement or tilt to control the search results? One of the problems with using movement for input is context: when is movement intended? And when is movement the result of walking or a bump in the road?
One solution could be a “squeeze and shake” quasi-mode: squeezing the device puts it into a receptive state.
Another could be more reliance on the 3 axes of tilt, which are less sensitive to larger movements of walking or transport.
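The “squeeze and shake” quasi-mode could be sketched as a simple filter: tilt samples only move the selection while the device is squeezed. This is speculative pseudocode with invented sensor values, there is no real hardware behind it.

```python
# Sketch of a squeeze quasi-mode: tilt scrolls a list only while the
# device is squeezed, so walking and bumps are ignored the rest of
# the time. Sample values are hypothetical.

def scroll_position(events, list_length):
    """Fold a stream of (squeezed, tilt) samples into a list index."""
    position = 0
    for squeezed, tilt in events:
        if not squeezed:
            continue  # movement without squeezing is not intended as input
        position += 1 if tilt > 0 else -1
        position = max(0, min(list_length - 1, position))
    return position

# three forward tilts while squeezed; one tilt while not squeezed is ignored
events = [(True, 0.5), (True, 0.7), (False, 0.9), (True, 0.4)]
print(scroll_position(events, 10))  # 3
```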
I’m not sure about gestural interfaces: most of the prototypes I have seen are difficult to learn, and require a certain level of performativity that I’m not sure everyone wants to be doing in public space. But having accelerometers inside these devices should allow for hacking together other personal, adaptive gestural interfaces that would perhaps access higher-level functions of the device.
One gesture I think could be simple and effective would be covering the ear to switch tracks. To try this out we could add a light or capacitive touch sensor to each earbud.
With this I think we would have trouble with interference from other objects, like resting the head against a wall. But there’s something nicely personal and intimate about putting the hand next to the ear, as if to listen more intently.
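One way to handle that interference would be to accept only short covers as gestures, so a quick hand over the ear skips a track while a long occlusion (a wall, a pillow) does nothing. The thresholds below are guesses, not measured values.

```python
# Sketch: detect the ear-cover gesture from a boolean sensor stream,
# one sample per tick. Only brief covers count as a "skip" command,
# filtering out long occlusions like resting against a wall.

def detect_skip(samples, min_covered=2, max_covered=6):
    """Count skip gestures in a stream of covered/uncovered samples."""
    skips, run = 0, 0
    for covered in samples:
        if covered:
            run += 1
        else:
            if min_covered <= run <= max_covered:
                skips += 1  # a short, deliberate cover-and-release
            run = 0
    return skips

# a quick tap (3 ticks) counts; a 20-tick rest against a wall doesn't
print(detect_skip([False, True, True, True, False] + [True] * 20 + [False]))  # 1
```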
h3. More knobs
Things that are truly analogue, like volume and time, should be mapped to analogue controls. I think one of the greatest unexplored areas in digital music is real-time audio-scrubbing, currently not well supported on any device, probably because of technical constraints. But scrubbing through an entire album, with a directly mapped input, would be a great way of finding the track you wanted.
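Directly mapped scrubbing could be as simple as mapping knob angle linearly onto the whole album timeline, vinyl-style, rather than nudging forward CD-style. The album data below is invented for illustration.

```python
# Sketch: map a knob's angle (0.0 to 1.0) directly onto an album's
# timeline, returning which track, and how far into it, that lands.

def scrub(angle, album):
    """album: list of (title, length_in_seconds). Returns (track, offset)."""
    total = sum(length for _, length in album)
    target = angle * total
    for title, length in album:
        if target < length:
            return title, target
        target -= length
    return album[-1][0], album[-1][1]  # knob turned all the way to the end

album = [("Intro", 120), ("Single", 240), ("Outro", 180)]
print(scrub(0.5, album))  # halfway through the 540s album: ('Single', 150.0)
```

The appeal is that the mapping is absolute: the same knob position always means the same moment in the album, which is exactly what the clickwheel's relative scrolling lacks.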
Research projects like the “DJammer”:http://www.hpl.hp.com/research/mmsl/projects/djammer/ are starting to look at this, specifically for DJs. But since music is inherently time-based there is more work to be done here for everyday players and devices. Let’s skip the interaction design habits we’ve learnt from the CD era and go back to vinyl 🙂
h3. Evolution of the display
Where displays are required, I hope we can be free of small, fuzzy, low-contrast LCDs. With new displays being printable on paper, textiles and other surfaces there’s the possibility of improving the usability, readability and “glanceability” of the display.
We are beginning to see signs of this with the OLED display on this “Sony Network Walkman”:http://dapreview.net/comment.php?comment.news.1086 where the display is under the surface of the product material, without a separate “glass” area.
For the white surface of an iPod, the high-contrast, “paper-like surfaces”:http://www.polymervision.com/New-Center/Downloads/Index.html of technologies like e-ink would make great, highly readable displays.
So I really need to get prototyping with accelerometers and display technologies, to understand simple movement and gesture in navigating music libraries. There are other questions to answer: I’m wondering if using movement to scroll through search results would create the appearance of a large screen space, through the lens of a small screen. As with “bumptunes”:http://interconnected.org/home/2005/03/04/apples_powerbook, I think many more opportunities will emerge as we make these things.
h3. More reading
“Designing for Shuffling”:http://www.cityofsound.com/blog/2005/04/designing_for_s.html
“Thoughts on the iPod Shuffle”:http://interconnected.org/home/2005/04/22/there_are_two
“On the body”:http://people.interaction-ivrea.it/b.negrillo/onthebody/
“Yayhooray”:http://www.yayhooray.com re-launched with new features and functions, and what looks like a rich environment for writing, browsing and discussion. As far as I know it’s the first forum built to use the buddy list as a form of content filtering: to increase the signal to noise ratio in the content.
Here’s a bit of Yayhooray history:
Built by “skinnyCorp”:http://www.skinnycorp.com in 2001 as an experiment in online community. Along with “o8”:http://126.96.36.199/search?q=cache:1nd31d-exeAJ:www.cotworld.com/main/journal.asp%3FJournal_ID%3D539 it soaked up some of the users from “Dreamless”:http://www.dreamless.org/, the ‘design forum’ that reached critical mass and became its own “worst enemy”:http://www.shirky.com/writings/group_enemy.html at the end of 2000.
Originally it was built to manage itself through a levels system, allowing users to earn administration responsibilities (similar to implicit moderation systems employed by other forums like “metafilter”:http://www.metafilter.com). It worked well at a small scale but led to cliques forming around the early adopters’ own social networks.
The levels system evolved into a points system, allowing anyone to award points to anyone, on a limited (one a day, one person a week) basis, similar to karma systems adopted at “slashdot”:http://slashdot.org/ and “kuro5hin”:http://www.kuro5hin.org/. This briefly led to multiple account scams, and ended up in the ‘point orgy’ where ‘points were swapped rather than STDs’.
In the end, both systems were abused, subverted and widely discussed, often taking over from normal discussions and swamping the site with controversy. Many regulars left to other places, some seeing closed, invite only communities (like “humhum”:http://humhum.be) as the only option left for humane, creative discussion.
Yayhooray, in this latest version, is setting itself up to deal with these problems by globally filtering the content through a buddy system, rather than explicitly administering the content and user reputations. This applies to the entire site including the categorised discussions, blogging interface, links database, buddy lists and search.
The most obvious feature is a meter on the left hand side, which allows 4 different filtering settings:
* you and only you
* you and your buddies
* you, your buddies, and their buddies
* every user on Yay Hooray!
This applies a filter to the entire site, including user lists and search, which took me a little by surprise. The site is effectively meshing off into small, interlinked communities of interest, based on individual social networks and collaborative filtering.
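The four settings read like depth limits on a walk of the buddy graph. A speculative sketch follows; the graph and names are invented, and Yayhooray’s actual implementation is not public.

```python
# Sketch: each filter setting as a breadth-first depth limit on the
# buddy graph. depth 0: only you; 1: you and buddies; 2: buddies of
# buddies; None: every user on the site.

def visible_users(graph, me, depth):
    """Return the set of users visible at a given buddy depth."""
    if depth is None:
        return set(graph)  # 'every user' setting: no filtering at all
    seen, frontier = {me}, {me}
    for _ in range(depth):
        frontier = {b for u in frontier for b in graph.get(u, [])} - seen
        seen |= frontier
    return seen

graph = {"me": ["ana"], "ana": ["ben"], "ben": [], "stranger": []}
print(sorted(visible_users(graph, "me", 2)))  # ['ana', 'ben', 'me']
```

Content filtering then reduces to a membership test: a post is shown only if its author is in the visible set.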
In my case, buddies are mostly people that I have met, talked to, or seen invest time into making things: initiating photographic threads, dealing with social issues, administering creative collaborations, giving good design critique…
Logging in now (using ‘you, your buddies, and their buddies’) I see a small subset of the overall forum, focused on these parts of the discussion. Given that the filter is so prominent and usable, it is also possible to jump out into the chaos of the full site.
There is also a useful, if somewhat harsh, system that censors posts and links based on a list of people that you class as ‘enemies’! Being based on proper XHTML, CSS and DOM technologies means that censored posts are easily toggled on and off.
On the downside there will most likely be confusion and clashes when different groups that don’t mesh with each other, but have completely different experiences of the place, come together in a single thread. There will also be more repetition, or double posts, as content gets repeated amongst different groups that are out of sync by virtue of the filters.
To fully appreciate this you need to invest time in it, and to build up a network of trusted buddies. YH can be hyperactive and annoying, and it must be difficult for a new user to become engaged. The filters are perhaps most useful for long-time users looking for relief from ‘worst enemy’ problems.
Because it has become an adaptive social platform, and has the potential to be subverted and shaped into many different kinds of system, I will reserve judgement for now, and make a new report soon.
David’s reference to 18 points as the minimum size equates to 18 pixels if you are coming from a web background.
On some iTV projects I have pushed the type down to 16 pixels, but be very careful about colours and contrast, and enquire about the production path to air: if the work is going to be transferred via DV tape, squeezed through an old composite link, or online-edited with high compression, then you might want to leave type as large as possible.
In some cases, such as using white text on a red background, you can add a very subtle black shadow to the type, which will help stop colour bleed and crawling effects. Even if you dislike drop-shadow effects, it will still look flat and lovely on a broadcast monitor.
Safe areas need to be taken with a pinch of salt. The default safe areas in most editing and compositing software date from years ago before the widespread use of modern, widescreen televisions.
Try extending the safe area for non-essential text in interactive projects, and consult broadcaster guidelines for their widescreen policies: many channels now broadcast in 14:9 to terrestrial boxes, and offer options to satellite and cable viewers.
The largest problem is that widescreen viewers often crop the top and bottom of the image by setting their TV to crop 4:3 to 16:9. Some cable/satellite companies remove the left and right of the image to crop 16:9 to 4:3 for non-widescreen viewers, leaving us only a tiny, safe rectangle in the centre of the image to work with.
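Those overlapping crops can be worked out numerically: a symmetric crop either slices the sides or the top and bottom, depending on which way the ratios differ. The figures below use a nominal 1024x576 square-pixel frame for simplicity, not broadcast pixel dimensions.

```python
# Sketch: compute the region that survives a symmetric crop to a
# target aspect ratio, to see how little of a 16:9 frame stays safe.

def crop_to(width, height, target_ratio):
    """Crop a frame symmetrically to a target aspect ratio (width/height)."""
    if width / height > target_ratio:
        return round(height * target_ratio), height  # slice the sides
    return width, round(width / target_ratio)        # slice top and bottom

w, h = 1024, 576                 # nominal 16:9 frame
print(crop_to(w, h, 14 / 9))     # terrestrial 14:9 crop: (896, 576)
print(crop_to(w, h, 4 / 3))      # sides removed for 4:3 viewers: (768, 576)
print(crop_to(768, 576, 16 / 9)) # that 4:3 image stretched-and-cropped to 16:9
```

Intersecting the two worst cases (sides lost to a 4:3 crop, top and bottom lost to a 16:9 crop) leaves roughly the central 768x432 of the original 1024x576, about 56% of the frame, which is the “tiny, safe rectangle” described above.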
There are also excellent documents on picture standards from the BBC.
But this is one thing I don’t understand: according to the BBC: “Additional [20 or 26 horizontal] pixels are not taken into account when calculating the aspect ratio, but without them images transferred between systems will not be the correct shape.” Can anyone confirm that this is the case for PAL images?
The Design of Everyday Things
Things That Make Us Smart
The Design of Sites: Patterns, Principles, and Processes for Crafting a Customer-Centered Web Experience
User-Centred Web Design
Contextual Design: A Customer-Centered Approach to Systems Designs
User and Task Analysis for Interface Design
Shaping Web Usability: Interaction Design in Context
Submit Now: Designing Persuasive Websites
Web Accessibility for People With Disabilities
Design by People for People: Essays on Usability
Designing Web Usability
Web Site Usability
Don’t Make Me Think!: A Common Sense Approach to Web Usability
Pollen Mobile develops location-based services for the consumer and business markets. Mamjam is their first product: a location based, social entertainment service based on Short Messaging Service (SMS) messages. It enables people in the same venue to chat with each other by sending text messages from their mobile phones.
Pollen approached us with a very broad intention to use SMS to drive social interaction and entertainment in new ways.
We initially developed three quirky ideas based on playground games, internet chat, and community storytelling that we presented as the basis for discovering business goals and user-needs.
After our initial brainstorm, we initiated a more rigorous user-centred, interaction design process that is detailed in this case-study.
h2. Handsets & Networks
We found several pivotal issues we needed to resolve: SMS has extremely limited functions, with few opportunities to create rich, engaging, extended interactions.
Mobile phone handsets provide no navigation between multiple messages, no indication of user status or location, and have no practical means of viewing session history. Users are accustomed to using SMS for quick functional communication, and extended contact with friends. They certainly do not rely on messages for any kind of complex interaction.
Every transaction between user and server on a mobile phone is a sessionless operation. Each message contains only the time it was sent, the number it was sent from, and the content of the message.
Unlike HTTP systems, the server cannot rely on location and session information being stored in the message address. This is complex from a user-experience perspective because people are used to the responses exhibited by systems that do carry session information and behave quite differently.
SMS messages are managed by the networks in cells; each cell carries messages particular to that region. Cells are notoriously unreliable, and we found that it was common for messages to hang in the system for over ten minutes. This presented some serious problems. Satisfying communications rely on a high level of continuity, and the timing between messages is a critical indicator of the emotional state of your chatting partner.
Mamjam’s service is location based: users are in contact with other users in the same area. However the existing (second generation) handsets cannot determine location, and although locations are triangulated by the network, this information is not publicly available. The location thus had to be manually provided by the user; in a way that then could be usefully interpreted by the server.
Researching and developing a reliable language for users to identify their location became central to the interaction design problem.
Many competing SMS services are currently internet-based: requiring a signup for services from a web site, rather than directly from handsets.
A system like this could conceivably be built without the use of modes. From the user’s perspective a modeless system could be overly complex and exhausting: every message must somehow include exact commands and instructions for the server. But a modeless system is very attractive from a technical perspective: the server is more likely to correctly interpret instructions.
We consulted with Pollen and selected SMS users to draw up several personas and scenarios. This included contextual enquiry, business goals and user-requirements gathering. We identified the following requirements:
* Users must be able to join the service immediately, not just from a website prior to use.
* The service should accommodate both new and returning users.
* Users are likely to be exposed to the service through all sorts of channels, and therefore signing up should accommodate all points of entry.
* The structure should be designed to accommodate expansion of the service.
* The basic structure of the handshake should carry to other SMS systems Pollen may choose to develop.
h3. First Iteration
The initial interaction architecture outlines our first intentions for the system. (For legal reasons we can’t include the full size diagram.)
The system works in a similar way to internet based chat rooms, connecting users who are ‘online’ at the same time, with the extra dimension that they are in the same physical place. Mamjam supports private, one-to-one communications only: users can’t shout to groups or broadcast messages. Once a user has found a chatting partner the system simply directs the text traffic between them until one party decides to pursue someone else, or signs off.
This structure required users to enter a lot of information about themselves before they could initiate contact with one another. We felt this was valuable in order to reduce the interaction load while chatting. This also resulted from a (perhaps misguided) adherence to the ‘internet chat room’ model.
This system was implemented on Pollen’s test servers, and we organised user-testing sessions. This revealed several problems:
The sign-up process was off-putting. Users’ motivation for this product is entertainment and social contact: they weren’t happy to tolerate a lengthy sign-up process. This architecture required four messages for a new user to sign up. In some cases the user would be spending the equivalent of a 10-minute voice call before they had connected with someone to chat. It was clear that the service needed to offer a quick method of signing up, perhaps at the expense of more advanced features.
In trying to optimise the system for both new and advanced users, signing up for the first time required a different interaction process from signing up for a second time. There were also several different methods of identifying your location, to anticipate every possible user interaction. There were thus four or five possible entry points into the system. This caused more modal problems than anybody anticipated: the SMS server had to process language and match patterns in an almost infinite realm of possibilities.
h3. Second Iteration
It became clear that the three biggest problems for the social interaction process were:
* Aligning the system’s perception of user-context with actual user-context.
* Ensuring users have an accurate perception of the system state.
* Maintaining a rich connection between users, allowing them to interpret and react to one another accurately.
This discrepancy between user perception and system perception can be referred to as ‘slippage’. Slippage is most problematic during the initial handshake when the user is most insecure about their request and about the system itself.
Text messages to and from SMS servers rarely arrive as punctually as they seem to in normal use. This meant it was possible for one of two users, both having agreed to start chatting, to reject the other on the basis that they had failed to reply to their confirmation. In fact the rejected user had replied with confirmation, but their message had been delayed. The message would then arrive with the first user who had since moved to a new part of the interaction process. Their reply could potentially interrupt another process or get lost in the system, confusing and infuriating both users. Serious slippage!
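One defence against this kind of slippage is to stamp each outbound prompt and treat late replies as stale rather than letting them interrupt a new exchange. This is a hypothetical sketch: the reply window and step names are invented, not Mamjam’s actual logic.

```python
# Sketch: classify an incoming reply against the prompt it answers,
# so a network-delayed confirmation can't derail a conversation that
# has already moved on. Timings are invented.

REPLY_WINDOW = 120  # seconds a reply to a prompt stays valid (assumed)

def classify_reply(prompt_sent_at, reply_arrived_at, current_step, prompt_step):
    """Decide whether a delayed reply still applies to the current exchange."""
    if reply_arrived_at - prompt_sent_at > REPLY_WINDOW:
        return "stale"        # delayed in the network: quietly drop it
    if current_step != prompt_step:
        return "out-of-step"  # the user has since moved to another step
    return "valid"

print(classify_reply(0, 60, "confirm", "confirm"))    # valid
print(classify_reply(0, 600, "chatting", "confirm"))  # stale
```

Dropping a stale confirmation is not ideal either, but it confines the damage to one lost message instead of two confused users.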
We also found, as predicted, that users did not read back through their old messages. Some phones have a very limited capacity for storing messages, and no phone facilitates simple navigation of previous messages, so the current message was the only one we could rely upon users to react to.
The second interaction architecture was developed with the problems described above in mind. Some changes have been made to the system since, mostly around modal issues, and the commands through which users communicate with the server. Although there are still issues regarding slippage, the second iteration makes this much less of a problem. The system is basically modeless, except for the first transaction. All users (new and existing) enter the system in the same way, new users are chatting within two messages, and existing users are potentially chatting after their first message.
h2. In Use
Mamjam is now fully operational, spinning off other services based on the basic interaction architectures we designed for the initial chat service.
h3. Extended Services
In a recent, typical promotion, at the Mood Bar in Carlisle, Mamjam sent a message to people who had Mamjamed there, offering them a discounted drink if they showed their mobile at the bar. The conversion rate from message sent to offer redeemed was 30%.
h3. Building relationships, Community and Storytelling
Having heard that a large number of people were texting their ex-partners late at night, under the influence, Mamjam sent a message asking users for their own dating disasters. 13% of people responded with their own story by SMS, 50% of those responding within the first hour.
These users were not given incentives like promotional offers, and the call to action was not a simple, generic mechanic like ‘reply YES or NO’: it was much more involved. Users were required to read and understand the message received, then conceive and craft a response to fit into 160 characters. Yet the response rate was high and the quality of response excellent.
h3. Stimulating usage
By reminding BT users of a free messaging offer, the objective was to stimulate Mamjaming outside of the locations in which users first Mamjamed.
p(quote). Message: Spice up your text life for FREE! Mamjam is still FREE to receive for BT users. To chat now just reply with mamjam and your location eg MAMJAM LONDON.
7% of the database of BT users read the message, and then decided to log on to Mamjam. Between them on that day they sent 3,400 chat messages.
h3. Some usage statistics
* First time Mamjam users begin chatting by sending only 2 SMS messages.
* Users are matched with someone within 120 seconds of logging onto the service for the first time.
* The average Mamjam user sends and receives 24 SMS messages per session.
* The top 10% of users send 60 SMS per month and generate an additional 72 outbound messages, generating an additional £18 for the network operators.
* The top 50% of users send 20 SMS per month and generate an additional 24 outbound messages, generating £6.30 of revenue for the network operator.
* Repeat usage: 30% of daily users are repeat users.
We think that the best solution to this particular service has been found, given the limitations detailed above.
There are obvious and not-so-obvious limitations to SMS communications. The most notable are the handsets’ continued reliance on short messaging rather than a more advanced chat service, and the network operators’ inability to develop services and platforms outside of their own internal structures.
This research and product development has generated a lot of further ideas for asynchronous communication structures, and communication solutions for packet switched networks for mobile devices.
Some phones support greater functionality than others; Mamjam needed to support a broad demographic, so only the most bottom-line functionality was available to us.
When sending a message from a server it can be set to “Flash” mode, causing the message to open on the user’s phone immediately. Some cells also support a “broadcast to cell” function, whereby a single message can be sent to all phones within that cell. This function is expensive and only available to phones on a given network.
Information transferred with HTTP is also sessionless, but browsers and servers are afforded functionality to help them overcome modal issues: cookies, history bars and links, for example. There are other interface restrictions to consider regarding the manipulation of text, like the absence of cutting and pasting.
h2. References and Links
At the time of writing the Mamjam numbers are 82888 (BT/Vodafone) or 07970 158 158 (all other networks). Just send any text message to sign up and test it for yourself.
h2. Professional Credits
h3. Interaction design
Jack Schulze, Adi Nachman, Timo Arnall
h3. Technical Architecture
h3. Information Design