Toward Motor–Intuitive Interaction Primitives for Touchless Interfaces

Chattopadhyay, D.
Extended Abstract Proceedings of the Tenth International Conference on Interactive Tabletops and Surfaces, 445–450, ACM.

November 2015

To design intuitive, interactive systems in various domains, such as health, entertainment, or smart cities, researchers are exploring touchless interaction. Touchless systems allow individuals to interact without any input device—using freehand gestures in midair. Gesture-elicitation studies focus on generating user-defined interface controls to design touchless systems. Interface controls, however, are composed of primary units called interaction primitives—which remain little explored. For example, what touchless primitives are motor-intuitive and can unconsciously draw on our preexisting sensorimotor knowledge (such as visual perception or motor skills)? Drawing on the disciplines of cognitive science and motor behavior, my research aims to understand the perceptual and motor factors in touchless interaction with 2D user interfaces (2D UIs). I then aim to apply this knowledge to design a set of touchless interface controls for large displays.

Motor-Intuitive Interactions Based on Image Schemas: Aligning Touchless Interaction Primitives with Human Sensorimotor Abilities

Chattopadhyay, D., & Bolchini, D.
Journal Paper Special Issue on Intuitive Interactions, Interacting With Computers, 27(3), 327–343.

May 2015


Abstract

Elicitation and evaluation studies have investigated the intuitiveness of touchless gestures but did not operationalize intuitiveness. For example, studies found that users fail to make accurate 3D strokes as interaction commands, but this phenomenon remains unexplained. In this paper, we first explain why making accurate 3D strokes is generally unintuitive: it exceeds our sensorimotor knowledge. We then introduce motor-intuitive, touchless interaction that uses sensorimotor knowledge by relying on image schemas. Specifically, we propose an interaction primitive—mid-air, directional strokes—based on the space schemas up–down and left–right. In a controlled study with large displays, we found that biomechanical factors affected directional strokes: strokes were efficient (0.2 s) and effective (12.5° angular error) but were affected by stroke direction and length. Our work operationalized intuitive touchless interaction using the continuum of knowledge in intuitive interaction and demonstrated how user performance of a motor-intuitive, touchless primitive based on sensorimotor knowledge (image schemas) is affected by biomechanical factors.
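For a concrete sense of the angular-error measure reported above, the sketch below shows one plausible way to compute it, assuming strokes are logged as 2D points in display coordinates and compared against an intended cardinal direction; the function and variable names are illustrative and not taken from the paper.

    import math

    # Assumption: strokes arrive as 2D points in display coordinates, and error is
    # the angle between the stroke's start-to-end vector and the intended direction.
    CARDINAL_ANGLES = {"right": 0.0, "up": 90.0, "left": 180.0, "down": 270.0}

    def stroke_angular_error(points, intended):
        """Angle in degrees between a stroke and its intended cardinal direction."""
        (x0, y0), (x1, y1) = points[0], points[-1]
        # Screen y grows downward, so negate dy to get a conventional math angle.
        actual = math.degrees(math.atan2(-(y1 - y0), x1 - x0)) % 360.0
        diff = abs(actual - CARDINAL_ANGLES[intended]) % 360.0
        return min(diff, 360.0 - diff)

    # A mostly rightward stroke that drifts upward by about 14 degrees.
    print(stroke_angular_error([(100, 500), (400, 425)], "right"))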

Exploring Perceptual and Motor Gestalt in Touchless Interactions with Distant Displays

Chattopadhyay, D.
Extended Abstract Proceedings of the Ninth International Conference on Tangible, Embedded and Embodied Interaction, 433–436, ACM.

January 2015


Abstract

Markerless motion-sensing promises to position touchless interactions successfully in various domains (e.g., entertainment or surgery) because they are deemed natural. This naturalness, however, depends upon the mechanics of touchless interaction, which remain largely unexplored. My dissertation first aims to deconstruct the interaction mechanics of touchless interaction, especially its device-less property, from an embodied perspective. Grounded in this analysis, I then plan to investigate how visual perception affects touchless interaction with distant, 2D displays. Preliminary findings suggest that Gestalt principles in visual perception and motor action affect the touchless user experience: user interface elements demonstrating perceptual-grouping principles, such as similarity of orientation, decreased users' efficiency, while continuity of UI elements forming a perceptual whole increased users' effectiveness. Moreover, following the law of Prägnanz, users often gestured to minimize their energy expenditure. This work can inform the design of touchless UX by uncovering relations between perceptual and motor gestalt in touchless interactions.

Understanding Visual Feedback in Large-Display Touchless Interactions: An Exploratory Study

Chattopadhyay, D., & Bolchini, D.
Technical Report IUPUI ScholarWorks, Indiana University.

November 2014


Abstract

Touchless interactions synthesize input and output across physically disconnected motor and display spaces. In the absence of haptic feedback, touchless interactions rely primarily on visual cues, but the properties of visual feedback remain unexplored. This paper systematically investigates how large-display touchless interactions are affected by (1) types of visual feedback—discrete, partial, and continuous; (2) alternative forms of touchless cursors; (3) approaches to visualize target selection; and (4) persistent visual cues to support out-of-range and drag-and-drop gestures. Results suggest that continuous visual feedback was more effective than partial feedback; users disliked opaque cursors, and efficiency did not increase when cursors were larger than the display artifacts. Semantic visual feedback located at the display border improved users' efficiency in returning within the display range; however, echoing the path of movement in drag-and-drop operations decreased efficiency. Our findings contribute key ingredients for designing suitable visual feedback in large-display touchless environments.

Holes, Pits, and Valleys: Guiding Large-Display Touchless Interactions with Data-Morphed Topographies

Chattopadhyay, D., Achmiz, S., Saxena, S., Bansal, M., Bolchini, D., & Voida, S.
Extended Abstract Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, 19–22, ACM.

September 2014


Abstract

Large, high-resolution displays enable efficient visualization of large datasets. To interact with these large datasets, touchless interfaces can support fluid interaction at different distances from the display. Touchless gestures, however, lack haptic feedback. Hence, users' gestures may unintentionally move off the interface elements and require additional physical effort to perform intended actions. To address this problem, we propose data-morphed topographies for touchless interactions: constraints on users' cursor movements that guide touchless interaction along the structure of the visualized data. To exemplify the potential of our concept, we envision applying three data-morphed topographies—holes, pits, and valleys—to common problem-solving tasks in visual analytics.
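As a purely illustrative sketch of the "valley" topography idea, the snippet below damps cursor motion perpendicular to a path through the data so that the touchless cursor tends to follow the visualized structure; the damping model, parameter values, and names are assumptions and not part of the extended abstract.

    # Assumption: the valley is a straight path given by a unit-length axis, and
    # across-valley motion is attenuated by a "stiffness" factor between 0 and 1.
    def apply_valley(cursor, raw_delta, valley_axis=(1.0, 0.0), stiffness=0.8):
        """Return the new cursor position after constraining raw_delta to the valley."""
        ax, ay = valley_axis
        dx, dy = raw_delta
        # Decompose the movement into along-valley and across-valley components.
        along = dx * ax + dy * ay
        perp = (dx - along * ax, dy - along * ay)
        # Keep along-valley motion; attenuate across-valley motion.
        new_dx = along * ax + perp[0] * (1.0 - stiffness)
        new_dy = along * ay + perp[1] * (1.0 - stiffness)
        return (cursor[0] + new_dx, cursor[1] + new_dy)

    # A diagonal hand movement is bent toward the valley's horizontal axis.
    print(apply_valley((200, 300), (10, 8)))  # -> (210.0, 301.6)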

Touchless Circular Menus: Toward an Intuitive UI for Touchless Interactions with Large Displays

Chattopadhyay, D., & Bolchini, D.
Conference Paper Proceedings of the International Working Conference on Advanced Visual Interfaces, 33–40, ACM.

May 2014


Abstract

Researchers are exploring touchless interactions in diverse usage contexts. These include interacting with public displays, where mice and keyboards are inconvenient, activating kitchen devices without touching them with dirty hands, or supporting surgeons in browsing medical images in a sterile operating room. Unlike traditional visual interfaces, however, touchless systems still lack a standardized user interface language for basic command selection (e.g., menus). Prior research proposed touchless menus that require users to comply strictly with system-defined postures (e.g., grab, finger-count, pinch). These approaches are problematic because they are analogous to command-line interfaces: users need to remember an interaction vocabulary and input a pre-defined symbol (via gesture or command). To overcome this problem, we introduce and evaluate Touchless Circular Menus (TCM)—a touchless menu system optimized for large displays that enables users to make simple directional movements to select commands. TCM utilize our ability to make mid-air directional strokes, relieve users of learning posture-based commands, and shift the interaction complexity from users' input to the visual interface. In a controlled study (N=15), when compared with contextual linear menus using grab gestures, participants using TCM were more than twice as fast in selecting commands and perceived a lower workload; however, they made more command-selection errors with TCM than with linear menus. The menu's triggering location on the visual interface significantly affected the effectiveness and efficiency of TCM. Our contribution informs the design of intuitive UIs for touchless interactions with large displays.
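The snippet below is a minimal sketch of the direction-to-command mapping that such a circular menu implies: a command is chosen by the angle of the stroke from the menu's triggering point. The sector layout, command labels, and function names are illustrative assumptions, not the authors' implementation.

    import math

    # Assumption: the hand position is already mapped to display coordinates, and
    # commands are laid out in equal angular sectors around the triggering point.
    def select_command(origin, hand, commands):
        """Pick the command whose sector contains the stroke's direction."""
        dx, dy = hand[0] - origin[0], -(hand[1] - origin[1])  # invert screen y
        angle = math.degrees(math.atan2(dy, dx)) % 360.0
        sector = 360.0 / len(commands)
        # Center the first sector on 0 degrees (a stroke to the right).
        index = int(((angle + sector / 2.0) % 360.0) // sector)
        return commands[index]

    commands = ["copy", "paste", "delete", "undo"]           # 90-degree sectors
    print(select_command((500, 500), (620, 480), commands))  # slight up-right -> "copy"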

A ‘Stopper’ Metaphor for Persistent Visual Feedback in Touchless Interactions with Wall-Sized Displays

Chattopadhyay, D., Pan, W., & Bolchini, D.
Extended Abstract International Symposium on Pervasive Displays (PerDis), Mountain View, California, USA.

June 2013

Abstract

To interact with wall-sized displays (WSD) from a distance of five to ten feet, users can leverage touchless gestures tracked by depth sensors such as Microsoft's Kinect®. Yet when users' gestures inadvertently land outside the WSD range, no visual feedback appears on the screen, leaving users to wonder what happened and slowing down their actions. To combat this problem, we introduce Stoppers, a subtle visual cue that appears at the gesture's last exit position, informing users that their gestures are off the WSD range but still being tracked by the sensors. In an 18-participant study investigating touchless selection tasks on an ultra-large, 15.3-megapixel WSD, introducing Stoppers made users twice as fast at getting their gestures back within the display range. Users reported Stoppers as an intuitive, non-distracting, and easy-to-use visual guide. By providing persistent visual feedback, Stoppers show promise as a key ingredient for enhancing fundamental mechanisms of user interaction in a broad range of touchless environments.
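A minimal sketch of the behavior described above follows, assuming the tracked hand is mapped to display coordinates and the cue is pinned at the clamped border point where the gesture exited; the class and method names are illustrative, not taken from the paper.

    # Assumption: update() is called each frame with the mapped cursor position;
    # it returns where the Stopper cue should be drawn, or None if on screen.
    class Stopper:
        def __init__(self, width, height):
            self.width, self.height = width, height
            self.cue = None  # (x, y) of the pinned cue, or None

        def update(self, x, y):
            inside = 0 <= x < self.width and 0 <= y < self.height
            if inside:
                self.cue = None  # gesture is back within the display range
            elif self.cue is None:
                # Pin the cue at the border point where the gesture exited.
                self.cue = (min(max(x, 0), self.width - 1),
                            min(max(y, 0), self.height - 1))
            return self.cue

    s = Stopper(5120, 2880)
    print(s.update(5300, 1000))  # off the right edge -> cue pinned at (5119, 1000)
    print(s.update(4000, 1000))  # back in range -> None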

Laid-Back, Touchless Collaboration around Wall-size Displays: Visual Feedback and Affordances

Chattopadhyay, D., & Bolchini, D.
Extended Abstract Position paper at the International Workshop on Interactive, Ultra-High-Resolution Displays (POWERWALL), CHI, Paris, France.

May 2013

Abstract

To facilitate interaction and collaboration around ultra-high-resolution, Wall-Size Displays (WSD), post-WIMP interaction modes such as touchless and multi-touch have opened up unprecedented opportunities. Yet to fully harness this potential, we still need to understand fundamental design factors for successful WSD experiences. Some of these include visual feedback for touchless interactions, novel interface affordances for at-a-distance, high-bandwidth input, and the techno-social ingredients supporting laid-back, relaxed collaboration around WSDs. This position paper highlights our progress in a long-term research program that examines these issues and spurs new, exciting research directions. We recently completed a study investigating the properties of visual feedback in touchless WSD interaction, and we discuss some of our findings here. Our work exemplifies how research in WSD interaction calls for re-conceptualizing basic, first principles of Human-Computer Interaction (HCI) to pioneer a suite of next-generation interaction environments.