Defining gestures for basic word processing tasks in wearable augmented reality



Creator/Artist: Kuldeep Singh Rathod

Category: Interaction Design

Document: P2 Project

Batch: 2017-2019

Source: IDC, IIT Bombay, India

Period: 2009-2018

Medium: Report (PDF)

Supervisor: Prof. Jayesh Pillai


Detailed Description

Augmented reality has moved beyond existing technology by integrating digital data with the real world in real time, offering users an interactive experience. There has been considerable research and technological advancement on both the hardware and software fronts to make these interactions easy and intuitive. For wearable augmented reality devices, however, hardware development has not kept pace with software development, and this limits the user experience of such devices. Many of the basic activities users perform on any device, such as browsing the internet and reading and writing emails, require text input. Existing AR devices rely on the point-and-click method adopted from traditional GUI systems, which makes text input tedious and time-consuming. This study was conducted to identify and define the elements of appropriate gestures to serve as input for AR devices. Its scope is limited to word processing, for a set of tasks chosen beforehand. The experiment takes the form of a user elicitation study, whose results form the basis for guidelines for designing the specified gestures.
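User elicitation studies of this kind commonly quantify how much participants converge on the same gesture for a given task using an agreement rate (the measure popularized by Wobbrock et al.). The report does not state which metric it uses, so the following is only an illustrative sketch of the standard formula, with made-up gesture labels and participant data:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent (task):
    A = sum over groups of identical proposals of (|group| / |all proposals|)^2.
    Ranges from 1/n (no two participants agree) to 1.0 (full consensus)."""
    n = len(proposals)
    counts = Counter(proposals)
    return sum((c / n) ** 2 for c in counts.values())

# Hypothetical data: 10 participants propose gestures for "delete word"
proposals = ["swipe-left"] * 6 + ["pinch"] * 3 + ["shake"]
print(agreement_rate(proposals))  # (6/10)^2 + (3/10)^2 + (1/10)^2 = 0.46
```

Tasks with high agreement rates are the ones for which a single consensus gesture can reasonably be standardized; low-agreement tasks typically need further design iteration.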