
LITHO

Litho is a wearable input device primarily used for augmented reality (AR) applications. Litho is worn between the index and middle finger while a phone is held in the other hand to view the AR experience. Worn this way, Litho lets you intuitively manipulate virtual objects in space, making it an input device for the real world.

Company: Litho

London, UK
Jul 2019 - Nov 2019

Why & How

AR creates a composite view by overlaying computer-generated images on a user’s view of the world. The technology is still new, with various AR glasses getting ready for market. AR holds a lot of promise to enhance our perceived reality: by adding information where requested, or by simplifying complicated real-world tasks. For both industry and consumers, AR is seen as the next paradigm of computing, but how will we interact with objects that we can see but not touch? This is where Litho comes in.

Litho’s fundamental interaction is based on pointing at the object in space that the user wants to interact with. This is made possible by tracking the arm and wrist position in real space. That information is then used to create a virtual laser pointer, directed from the fingers, which can be used to select virtual objects. For finer control, Litho has a 2D touchpad within reach of the thumb for scroll-like interactions. Complementary to this, the controller provides precise haptic feedback for a truly multi-modal experience. Have a look at the video above to learn more about how Litho is used.
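To make the pointing model concrete, here is a minimal sketch of how a tracked wrist pose could be turned into a selection ray. This is my own illustration in plain Python with NumPy, not Litho’s actual SDK; the `WristPose` and `VirtualObject` types and the sphere-based selection test are assumptions for demonstration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WristPose:
    """Hypothetical tracked pose: position in metres, forward unit vector."""
    position: np.ndarray   # shape (3,)
    forward: np.ndarray    # shape (3,), unit length

@dataclass
class VirtualObject:
    name: str
    center: np.ndarray     # shape (3,)
    radius: float          # selection sphere around the object, metres

def pick(pose: WristPose, objects: list[VirtualObject]) -> VirtualObject | None:
    """Cast a ray from the fingers and return the nearest intersected object."""
    best, best_t = None, float("inf")
    for obj in objects:
        # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
        oc = pose.position - obj.center
        b = np.dot(oc, pose.forward)
        disc = b * b - (np.dot(oc, oc) - obj.radius ** 2)
        if disc < 0:
            continue  # the ray misses this object entirely
        t = -b - np.sqrt(disc)
        if 0 < t < best_t:
            best, best_t = obj, t
    return best
```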


A computer mouse for the real world

As you can imagine by now, Litho is quite different from anything else on the market. At its core, Litho is comparable to a PC mouse: both use gross motor skills mapped to a pointer and have a form of linear input (think of the scroll wheel). Litho, however, works in 3D space and therefore remains quite unique. Hence the need for an interaction framework that is device agnostic, scales across various AR applications, and utilises Litho’s strengths. To bring this all together into a coherent experience, I designed a miniature city-builder game to validate the newly designed interaction framework and its logic.


Research phase

As an initial research phase, I gave Litho to a varied group of people to let them try the standard demo app. This phase elicited fundamental pain points in the experience, some of which were easily solvable and some of which were not.
For example, Litho requires a particular stance for the tracking to be optimal. In addition, Litho was often worn the wrong way around. This is due to its form, which provides no directional bias, meaning it has very few visual cues as to what is forward or backward. As a consequence, the calibration sequence would often fail, making Litho unusable.
Since these steps could not be simplified at the time, a minimal 2D UI was created with a visual onboarding sequence to quickly teach new users the steps required to set up Litho and scan their AR environment correctly. At the end of this phase, some apparent questions remained regarding Litho’s spatial UI, which are described in detail below:


How to spawn objects?

Since Litho works in 3D space, I decided to stay close to physical interactions for inspiration; a bookshelf analogy seemed to make sense. The user can select and grab the object they want from the shelves and place it in the world. After various iterations I settled on a concept called the “belt menu”, which contains a number of shelves. The menu’s name comes from the way it presents itself.
Shelves float around the user and animate outward from the hips, as if worn like a belt wherever you go, hence the belt reference. With a twist of the wrist, the menu appears in view of the camera, and it can be hidden with another twist. Via a swipe gesture on the touchpad, the user cycles through the different shelves holding various virtual items.
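The show/hide and shelf-cycling logic can be sketched as a small state machine. The class name, threshold values, and hysteresis below are illustrative assumptions, not Litho’s implementation:

```python
# Hypothetical sketch of the belt-menu logic: a wrist twist toggles
# visibility, and horizontal swipes on the touchpad cycle shelves.

ROLL_THRESHOLD = 1.2  # radians of wrist roll that count as a deliberate twist

class BeltMenu:
    def __init__(self, shelves: list[str]):
        self.shelves = shelves
        self.index = 0
        self.visible = False
        self._twisted = False

    def on_wrist_roll(self, roll: float) -> None:
        """Toggle the menu on a twist; re-arm once the wrist returns to rest."""
        if abs(roll) > ROLL_THRESHOLD and not self._twisted:
            self.visible = not self.visible
            self._twisted = True
        elif abs(roll) < ROLL_THRESHOLD * 0.5:
            self._twisted = False  # hysteresis prevents rapid re-toggling

    def on_swipe(self, dx: float) -> None:
        """Cycle through shelves with a horizontal swipe on the touchpad."""
        if self.visible and abs(dx) > 0.3:
            self.index = (self.index + (1 if dx > 0 else -1)) % len(self.shelves)
```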

How to manipulate objects?

What is the best way to manipulate these objects? As a start, I looked into the following primary actions: moving, rotating, and scaling. I tested a variety of interactions: selecting the action by swiping on the touchpad, separate actions via virtual buttons, and more. It became clear that moving and rotating were easy using the touchpad, but that there was a need for more specific actions depending on the object’s context.
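As an illustration of the actions that were tested, the sketch below maps touchpad deltas onto a selected object’s transform depending on an active mode. The mode names and gain values are assumptions for demonstration, not Litho’s actual parameters:

```python
import numpy as np

def manipulate(obj_transform: dict, mode: str, dx: float, dy: float) -> dict:
    """Apply a touchpad delta (dx, dy in [-1, 1]) to a simple transform.

    obj_transform is assumed to hold {'position': np.ndarray, 'yaw': float,
    'scale': float}; a returned copy carries the updated values.
    """
    t = dict(obj_transform)
    if mode == "move":
        # Slide the object along the ground plane.
        t["position"] = t["position"] + np.array([dx, 0.0, dy]) * 0.5
    elif mode == "rotate":
        # Horizontal swipes spin the object around its vertical axis.
        t["yaw"] += dx * np.pi / 8
    elif mode == "scale":
        # Vertical swipes grow or shrink the object, clamped to sane bounds.
        t["scale"] = float(np.clip(t["scale"] * (1 + dy * 0.1), 0.1, 10.0))
    return t
```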


How to transform objects?

Specific objects require specific actions. Litho is a one-handed input device, so to present the user with a list of object-specific actions, a contextual menu structure was developed.
The menu is comparable to a right mouse click, which presents new options relevant to the item that was selected. Since this is 3D space, extra attention was given to the menu’s size, which depends on the distance from the viewer; the position of the menu; and button behaviour and representation. When interacting with the contextual menu, the pointer changes to a reticle, indicating a different form of interaction and preventing mode confusion.
The menu opens when pointing at an object while tapping the touchpad. The menu is circular so that actions can be accessed quickly, and it allows the user to develop muscle memory over time. Sub-menus were also supported for added versatility of the menu structure.
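Two of the menu’s spatial rules lend themselves to a short sketch: keeping a constant apparent size by scaling with viewer distance, and mapping the pointer’s angle to a fixed action sector to support muscle memory. The function names and the 10° default are hypothetical:

```python
import math

def menu_world_size(distance_m: float, apparent_deg: float = 10.0) -> float:
    """World-space diameter that subtends a constant visual angle.

    An object of diameter D at distance d subtends 2*atan(D / (2d)),
    so for a target angle we solve D = 2 * d * tan(angle / 2).
    """
    return 2 * distance_m * math.tan(math.radians(apparent_deg) / 2)

def hit_sector(pointer_angle_rad: float, n_actions: int) -> int:
    """Map the pointer's angle on the circle to one of n equal action sectors,
    so each action always lives at the same angular position."""
    sector = 2 * math.pi / n_actions
    return int((pointer_angle_rad % (2 * math.pi)) // sector)
```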


Validation & Reflection

The framework proved successful in the case of a city-builder game. But how does it perform in other applications? To prove scalability, I created two more apps using the framework’s principles: a furniture placement app and an industrial annotation app.
Although the framework and its implementation were perceived well, some elements would need more exploration. The twist gesture to open the menu is easy and quickly learned, which indicates that more gestures could benefit the framework. A home-button or back-button gesture would be beneficial when the user is unsure what they are interacting with; this happens often in AR, since not all objects are always in the field of view. Additionally, haptic feedback needs further investigation: it enriches the experience and can provide unobtrusive information via a non-visual channel. Similar to haptics, audio cues have yet to be part of this framework; they could enrich the experience and free up screen space for a simpler, more harmonious experience.