Computer Science Thesis Proposal

Tuesday, June 6, 2017 - 10:30am to 11:30am


8102 Gates Hillman Centers


SE-JOON CHUNG, Ph.D. Student

Our hands are the primary way we interact with the world around us. Recent trends in virtual reality (VR) reflect the importance of hand-based interaction: mainstream VR headsets such as the Oculus Rift, HTC Vive, and PlayStation VR all support and encourage the use of hand-tracked controllers. However, tracking hands directly is very challenging due to their small size and frequent occlusions. For example, significant portions of the hand can be occluded when the hands are clasped together or holding an object. For this reason, the aforementioned companies have users hold controllers, which can be tracked more reliably than bare hands. Another problem with hand tracking is that it often adds latency to the system. Networked multiplayer interactions are even harder to deliver without noticeable delay, because network latency compounds tracking latency. We can work around this problem by predicting future hand motions using gaze. Eye tracking is expected to be ubiquitous in the next generation of VR headsets because of its many benefits, such as foveated rendering, gaze-based object selection, and virtual social interactions. However, exploiting the well-known importance of hand-eye coordination during object manipulation to predict hand motion remains relatively unexplored.

In this proposal, we present ways to overcome the current limitations of hand use in VR by addressing these challenges. To address the difficulty of hand tracking, we present a way to estimate the full hand pose given a few reliably tracked points on the hand. To address latency in hand tracking and multiplayer interactions, we propose a method to predict hand pose from gaze, which will be commonly available in next-generation VR headsets. We first motivate our approach by presenting our observations of hand-eye coordination.
Then, we present a study on predicting grasp types, showing that gaze is effective in predicting hand interactions. Finally, we conclude the proposal with plans for completing the hand pose prediction framework.

Thesis Committee:
Nancy Pollard (Chair)
Kris Kitani
Katerina Fragkiadaki
Carol O'Sullivan (Trinity College Dublin)
