Why AR needs a Litho
AR overlays computer-generated images on a user's view of the real world. It holds a lot of promise to enhance our perceived reality: adding information where it is requested, or simplifying complicated real-world tasks. Both industry and consumers see AR as the next wave of computing, but how will we interact with objects that we can see but not touch? This is where Litho comes in.
Litho tracks your arm so you can point at and interact with the virtual world. It has a touchpad for scrolling and provides haptic feedback to enrich the interaction. Combined, these make for an intuitive interaction that bridges the real and the virtual, turning Litho into a "mouse for the real world".
Here you see Ben opening a menu with a flick of the wrist.
Litho is like no other input device, so it needed an interaction framework that other developers could use and build on. The framework had to be device agnostic, scale across various AR applications, and play to Litho's strengths. To bring this all together into a coherent experience, I spent a large part of my effort on designing and validating. The new framework was tested in various apps: a City Builder game, a virtual furniture demo and an industrial annotation demo.
The framework uses contextual menus that are attached to objects in the world. Besides menus, these objects also have properties, such as physics, that need to respond to Litho. The main menu is largely based on a 'bookshelf' analogy, where the user grabs virtual objects from a shelf and places them in the real world.
Working at Litho made me rethink the status quo around UI design. A 3D interface might look intriguing, but that doesn't automatically make it better.