The Future of Computing is Programming for Individuals in their Environments

In his 2012 paper, “What next, Ubicomp? Celebrating an intellectual disappearing act”, Gregory D. Abowd puts forward an exciting vision for the next phase of computing, where ubiquitous computing (ubicomp), having ceased to exist as a distinct area of research, has combined with everyday HCI to become “the intellectual domain of all computing”. This change, he proposes, means that we can and should now develop computing applications that exist not in a single device but instead are designed to have an awareness of the location, context and physicality of the individual.

The idea is perhaps best summarised by his phrase “from programming environments to programming environments”, which emphasises that even in today’s world of multi-functional apps, we remain locked into thinking of a smartphone screen as the major component of its experience. But, as the author points out, this is at odds with the reality of today’s world: thanks to commodity sensors and actuators, wearable computing, and the explosion of everyday computing devices and the Internet of Things, it is possible (albeit not easy) to design and build a computing experience that takes into account the individual as a person in an environment, not just some fingertips on the other side of a screen. The 2-dimensional GUI is just a subset of the phone’s capabilities, not the total environment we should be developing for.

The paper is a well-written, contextual overview of the ubiquitous computing space, looking back to Weiser’s landmark 1991 article, The Computer for the 21st Century, to see that in fact many of his predictions have come true – his ideas of “tabs, pads and boards” having been loosely realised as smartphones, tablets and digital whiteboards respectively. The paper then sets out to chart the history of developments since then, through HyperCard and PARCTab to today’s technologies such as Microsoft Surface, Arduino and the numerous technologies that “makers” experiment with. He also points out one branch of research that has stalled – that of context-aware computing projects such as iCAP. But where the paper is most interesting is where Abowd casts his eye to the future and what we might face next.

Developers of Google Glass wearable technology out on the streets – the tip of the iceberg of programming for individuals in their environment

He observes that augmented reality and location-aware applications are just the tip of the iceberg, and argues that developing for real-world environments needs to become as easy for developers as developing for 2D screens is today. He highlights a number of trends that provide the platform from which ubiquitous computing is now able to “go mainstream”, namely:

  • the outsourcing of storage and of processing to the cloud
  • the move from local processing to remote services
  • the ability to access the collective experiences of others, in real-time
  • the ability to obtain near-instantaneous answers to just about every question we can ponder.

Abowd explains that computing cycles and storage are effectively being divorced from our interaction devices, and predicts that input and output will almost certainly follow suit, so that in the future the division between the computing device and the individual will no longer be needed. Abowd’s vision of future computing is a hybrid, conjoined experience between man and machine, not in the cyborg sense suggested twenty years ago, but in a holistic, environment-aware fashion. It’s a compelling vision that I find very easy to buy into.

For me, this paper sparked a number of connections to my own thoughts on technology in everyday life (a few years ago I ran a blog exploring the impacts of technology on society, Human 2.0). It also prompted many research ideas as I read it – for example, it would be interesting to explore the extent to which people view their data and devices as an extension of their “digital self”, or to explore the value of giving people a context-based view into their digital lives (as outlined in my article “Why Files Need to Die”). Another idea would be to explore, through hack days or similar events, how developers might be empowered to develop in a more environment-centric manner.

The paper contains more ideas than I can do justice to here; suffice it to say that I thoroughly recommend reading it as an introduction to the rich seam of ubiquitous computing research taking place today.


The paper I have chosen as an interesting (and potentially troubling/invasive) example of a ubicomp application is Encountering SenseCam: Personal Recording Technologies in Everyday Life by Nguyen et al.

  • This is interesting because it raises new social etiquette questions such as “If I can see it, does that entitle me to record it?” or “Do others need to give me permission to record their conversations with me?” or “Do I need to tell them I’m recording?”. I’ve previously blogged a related question: Is photography a human right?
  • It’s also very practical because it introduces a new technology, the wearable, always-on camera, and outlines some of the benefits of lifelogging (introduction here), but also tests people’s reaction to that technology.
  • It closely relates to an excellent book I’ve read on related research in this field, Your Life, Uploaded by Gordon Bell and Jim Gemmell, and builds on their earlier research.
