Understanding Multi-touch Manipulation for Surface Computing
Two-handed, multi-touch surface computing provides scope for interactions that are closer analogues to physical interactions than classical windowed interfaces. The design of natural and intuitive gestures is a difficult problem because we do not know how users will approach a new multi-touch interface or which gestures they will attempt to use. We studied whether familiarity with other environments influences how users approach interaction with a multi-touch surface computer, as well as how efficiently those users complete a simple task. Inspired by the need for object manipulation in information visualization applications, we asked users to carry out an object sorting task on a physical table, on a tabletop display, and on a desktop computer with a mouse. To compare users' gestures, we produced a vocabulary of manipulation techniques that users apply in the physical world and compared this vocabulary to the set of gestures that users attempted on the surface without training. We find that users who start with the physical model finish the task faster when they move over to using the surface than users who start with the mouse.
Chris North, Tim Dwyer, Bongshin Lee, Danyel Fisher, Petra Isenberg, Kori Inkpen, and George Robertson (2009). Understanding Multi-touch Manipulation for Surface Computing. In Proceedings of INTERACT 2009. Springer, Heidelberg, pages 236–249.