Seattle EXCOM and Lecture — Surface Hub + Kinect: Interaction Beyond Touch and Collaboration Beyond Video Chat

Please join our October lecture and EXCOM meeting.

Time: Oct 13th, 2015

6 pm: Lecture

7 pm: EXCOM

Place: Microsoft Research Building 99, Room 1919
14820 Northeast 36th Street
Redmond, WA 98052 USA

Speaker: Dr. Zhengyou Zhang, Microsoft Research, IEEE Fellow

Title: Surface Hub + Kinect: Interaction Beyond Touch and Collaboration Beyond Video Chat

 

Abstract: Large displays are becoming a commodity, and more and more of them are touch-enabled. In this talk, we describe ViiBoard (Vision-enhanced Immersive Interaction with Touch Board), a system built by adding a Kinect (depth sensor) next to a Surface Hub (large touch display). Please see the attached details. The system consists of two parts.

The first part, VTouch, augments touch input with visual understanding of the user to improve interaction with a large touch-sensitive display such as the Microsoft Surface Hub. A commodity color-plus-depth sensor such as the Microsoft Kinect adds the visual modality and enables new interactions beyond touch. Through visual analysis, the system understands where the user is, who the user is, and what the user is doing, even before the user touches the display. This information enhances interaction in multiple ways. For example, a user can use simple gestures to bring up menu items such as a color palette or soft keyboard; menu items can appear where the user is and follow the user; hovering can show information before the user commits to a touch; the user can perform different functions (for example, writing and erasing) with different hands; and each user's preference profile can be maintained separately from other users'. In user studies, participants greatly appreciated the value of these and other enhanced interactions.

The second part, ImmerseBoard, is a system for remote collaboration through a digital whiteboard that gives participants a 3D immersive experience, enabled only by an RGBD camera mounted on the side of a large touch display. Using 3D processing of the depth images, life-sized rendering, and novel visualizations, ImmerseBoard emulates writing side by side on a physical whiteboard, or alternatively on a mirror. User studies involving three tasks show that, compared to standard video conferencing with a digital whiteboard, ImmerseBoard gives participants a quantitatively better ability to estimate their remote partners' eye gaze direction, gesture direction, intention, and level of agreement. Moreover, these quantitative capabilities translate qualitatively into a heightened sense of being together and a more enjoyable experience. ImmerseBoard's form factor is suitable for practical and easy installation in homes and offices.

 

EXCOM

 

Chair, Dr. Sheree Wen

SWEN@ieee.org