Getting out of flatland: adventures in tangible computing

This week I’ve been researching tangible computing, starting with what at first sight seemed a fairly simple question: erm… what is it? Unfortunately, I soon started to come across a range of other related and overlapping terms, such as wearable computing, computer-supported cooperative work, context-aware computing, ubiquitous computing, social computing, embodied computing….

Just as I was beginning to drown in HCI word soup, I stumbled across the second chapter of Dourish’s Where the Action Is, which summarises the evolution of tangible computing. Dourish begins by discussing Weiser’s vision of invisible computing and moves on to Wellner’s DigitalDesk, described in 1993 as a “computationally enhanced desktop supporting interaction with both paper and electronic documents” (Wellner 1993). This system began to envisage the integration of the electronic and physical worlds, enabling handwritten documents to be manipulated electronically. It’s amazing to think that 23 years later (a lifetime in the fast-evolving world of HCI) I was watching the video and feeling envious of Wellner’s system. The DigitalDesk still has far more integration between bits and atoms than the modern working environment, with its two basic conversion options – scanning and printing.

In 1997 Brygg Ullmer’s thesis (with Hiroshi Ishii at the MIT Media Lab) showcased a range of interactions between the real and the digital world, where the manipulation of real-world objects facilitated a range of useful digital operations. This is the point where the potential for computing based on the manipulation of real-world objects began to become really clear (or should that be tangible?). The metaDESK is a type of GIS in which physical icons, or “phicons”, could be moved, rotated and pinched together to manipulate the location, orientation and scale of a digitally projected map. The fact that not all gestures could be meaningfully interpreted by the digital map (such as rotation of the phicons) suggested the benefit of “constraint”, which Ullmer demonstrates in further prototypes. Tangible computing has of course continued to develop since then, with the MIT Media Lab still leading the field. Other examples from MIT include Materiable (a shape-changing pin-board interface), cord UIs (augmented cables with intuitive pinching and knotting to dim lights and mute microphones), Physical Telepresence (shape capture and display for embodied, computer-mediated remote collaboration) and loads more.

Something that I think is clear in many of these examples is the sense of playfulness and fun. I would argue that this is inherent in the discipline of tangible computing and not simply down to novelty value. Maybe bringing computing into the real world, and enabling us to interact with it using real-world items like phicons and, above all, our own hands and bodies, is just inherently more satisfying and enjoyable than interacting through a GUI, keyboard and mouse.

All very well, but you may have noticed that I still haven’t actually answered the initial question: so what is tangible computing? … back to Dourish. “The essence of tangible computing lies in the way in which it allows computation to be manifest for us in the everyday world” (p. 42). Dourish also identifies three key features of tangible computing: there is no single point of control or interaction, interaction is non-sequential, and the physical properties of the interface suggest its use. I thought one of the most interesting points of the chapter was the contrast between virtual reality and augmented reality: while virtual reality takes the user and immerses them in a computational world, augmented reality takes the computer and immerses it in the real world. To me, the possibilities of augmented reality, which tangible computing exemplifies, are not only more fun; they’re potentially both useful and “invisible”. Weiser would approve…

This week we were asked to select three papers from tangible computing that interested us.

  1. If Your Mind Can Grasp It, Your Hands Will Help. This paper deals with learning (and forgetting) – something that, as a reformed language teacher and someone with bitter experience of Ebbinghaus’ forgetting curve (successfully replicated here), I have a longstanding interest in. The paper discusses the implications of tangible vs 2D data for learning and forgetting, and its application to tangible computing is clear.
  2. DataSpoon: Overcoming Design Challenges in Tangible and Embedded Assistive Technologies. I have to admit that it was the name DataSpoon that attracted me to this paper, but I discovered a more than passing similarity to a project I worked on at the recent NHS Hack Day in Newcastle, where we hacked commercially available fitness trackers to measure the extent of dystonic movements in patients with cerebral palsy. You can listen to me talking about the project at the NHS Hack Day here. I’ve chosen this paper to see if there’s anything I can glean from it that will be relevant to the work we’ve carried on beyond the hack day and which I’m still involved in.
  3. Tangible Viewports: Getting Out of Flatland in Desktop Environments. I picked this one as it seemed to extend ideas from Wellner’s DigitalDesk, and its applications to our working environment. I also really liked the title and stole it for the title of this blog post.

All three papers come from the Proceedings of TEI ’16, the Tenth International Conference on Tangible, Embedded, and Embodied Interaction.