Building the next generation of Shape Displays

Shape displays are devices that can mechanically deform their surface to render particular objects or shapes. If a regular display is made of an array of pixels, a shape display is made of an array of physical parts that can move.


Shape displays can be large enough to cover a whole desk, creating a deformable desk: perhaps the next generation of desktops.

And with new desks comes a whole new set of opportunities for work. A user could sit down and have the desk transform into a personally fitted working environment. Such an environment can overcome many limitations of less-than-perfect physical spaces: improving constrained spaces and enhancing existing hardware, such as presenting new keyboards and mice in the context of applications, especially when paired with a head-mounted display (HMD). It may even help level the playing field for users with physical limitations.

Traditional shape displays can do all of this and more by motorizing a grid of independent actuators, as in the inFORM and TRANSFORM prototypes from the MIT Media Lab. However, covering a large area with pins has its downsides (inFORM, for example, used 900 independently motorized pins). It makes the device expensive and prone to mechanical failures. Furthermore, regardless of the number of pins, the surface resolution is limited by the need to spread the pins over a large area. As a result, it is hard to render smooth surfaces such as those common in nature, topography, or large mechanical artifacts, and moving the hand over the pins reveals high-frequency aliasing between them.

Next generation

Over the last year, our team of Principal Researchers Mar Gonzalez-Franco, Eyal Ofek, and Mike Sinclair from Microsoft Research, together with Prof. Anthony Steed from UCL, Prof. Ryo Suzuki from the University of Calgary, Prof. Daniel Leithinger from CU Boulder, and Eric Gonzalez from Stanford, has focused on reimagining the concept of shape displays. We built three different prototypes with the main goal of making them more consumer friendly: cheaper to build and to fix. This exploration charts a new path toward the next generation of desktops and new forms of haptics for Augmented Reality and Virtual Reality.

Our main application targets tactile rendering only where the user's hand is. As a result, our shape displays do not need to be as big as earlier generations, but only as big as a user's palm or fingers. This greatly reduces the mechanical complexity of such devices, but requires better real-time tracking and prediction of the user's touch events. It also requires fast actuation to render a smoothly changing geometry as the user's hand moves and scans the object's surface.

Shape Display prototypes based on auxetic materials (top), swarm robots (left) and 360 handheld controller (right).

In our Nature Communications paper we propose a new shape display that uses a minimal number of mechanized actuators (nine in the presented prototype). We developed an auxetic surface that interpolates between the actuators. The surface is flexible and stretches between the actuators, yet it resists touch, giving the feeling of a rigid surface. It is a unique merging of a rigid robotic skeleton with deformable soft robotics. For its fabrication, we start with a rigid, stiff material (a polyethylene board, though it could be wood) and, through cut patterns, convert that rigid board into an auxetic material. The resulting surface maintains its structure and stiffness when needed, yet allows bending in all directions.
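To build intuition for how a surface driven by only nine actuators can render smooth geometry, here is a minimal sketch of the interpolation idea: a continuous surface height recovered from a sparse 3x3 grid of actuator heights. The bilinear blend and the millimeter values are illustrative assumptions, not the paper's actual surface model.

```python
# Hypothetical sketch: a smooth surface height interpolated from a sparse
# 3x3 grid of actuator heights. Illustrates the interpolation idea only;
# the real auxetic surface follows its own material mechanics.

def bilinear_surface(heights, u, v):
    """Interpolate a height at normalized coordinates (u, v) in [0, 1]
    from a grid of actuator heights (rows x cols of floats)."""
    rows, cols = len(heights), len(heights[0])
    # Map (u, v) into actuator-grid coordinates.
    x = u * (cols - 1)
    y = v * (rows - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, cols - 1), min(y0 + 1, rows - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding actuator heights.
    top = heights[y0][x0] * (1 - fx) + heights[y0][x1] * fx
    bottom = heights[y1][x0] * (1 - fx) + heights[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Nine actuators (3x3), heights in millimeters (illustrative values).
actuators = [[0.0, 10.0, 0.0],
             [10.0, 20.0, 10.0],
             [0.0, 10.0, 0.0]]

print(bilinear_surface(actuators, 0.5, 0.5))   # center actuator: 20.0
print(bilinear_surface(actuators, 0.25, 0.5))  # between edge and center: 15.0
```

Any point on the surface gets a height that varies smoothly between neighboring actuators, which is what lets so few motors stand in for a dense pin grid.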

The user can put their hand on the device and drag it over the desk as if it were a mouse. By tracking the device's position and orientation and updating the surface, it can give the illusion of a large surface. Furthermore, if the virtual object is dynamic, the shape display can add small motions to the rendered geometry, increasing realism.
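The "drag it like a mouse" behavior can be sketched as re-sampling a larger virtual heightmap under the device's tracked position each frame. The terrain function, grid spacing, and units below are all hypothetical stand-ins for illustration.

```python
# Hypothetical sketch of surface scrolling: as the handheld display is
# dragged across the desk, re-sample a larger virtual heightmap under the
# device's current position to get new targets for the 3x3 actuators.
import math

def virtual_terrain(x, y):
    """A stand-in virtual surface: gentle rolling hills (heights in mm)."""
    return 10.0 * (math.sin(x / 20.0) + math.cos(y / 20.0)) + 20.0

def actuator_targets(device_x, device_y, spacing=15.0):
    """Sample the virtual surface at the 3x3 actuator positions centered
    on the tracked device position (spacing in mm between actuators)."""
    return [[virtual_terrain(device_x + (col - 1) * spacing,
                             device_y + (row - 1) * spacing)
             for col in range(3)]
            for row in range(3)]

# Each new tracked pose yields a new set of target heights to drive toward.
targets = actuator_targets(100.0, 50.0)
```

Because only the heights under the device are ever rendered, the virtual surface can be arbitrarily large while the hardware stays palm-sized.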


Swarm Robots

In another prototype, presented this week at UIST 2021, we envision HapticBots, a swarm of robots that reach the hand just in time to provide haptics. While each robot can only render the surface touched by a single fingertip, a swarm of fast-moving, shape-changing robots can give the illusion of a large virtual surface. Each robot controls a small piece of surface, about the size of 2-3 fingertips, equivalent to one pin of a shape display, that can be moved around, rotated, raised, and lowered to encounter a fingertip as it approaches virtual geometry. The small robots can easily be picked up by the user and deployed to a different desk, turning it into a haptic desk.
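The just-in-time idea can be illustrated by pairing each predicted fingertip contact point with its nearest free robot. The greedy strategy and all names below are illustrative assumptions, not the actual HapticBots scheduler.

```python
# Hypothetical sketch of just-in-time dispatch for a swarm haptic display:
# pair each predicted fingertip contact with the nearest unassigned robot.
# Assumes len(targets) <= len(robots); a greedy strategy for illustration.
import math

def dispatch(robots, targets):
    """Greedily pair each target (x, y) with its nearest free robot.
    Returns a list of (robot_index, target_index) pairs."""
    free = set(range(len(robots)))
    pairs = []
    for t_idx, (tx, ty) in enumerate(targets):
        best = min(free, key=lambda r: math.hypot(robots[r][0] - tx,
                                                  robots[r][1] - ty))
        free.remove(best)
        pairs.append((best, t_idx))
    return pairs

robots = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]   # desk coordinates
fingertips = [(9.0, 1.0), (1.0, 1.0)]            # predicted contact points
print(dispatch(robots, fingertips))  # [(2, 0), (0, 1)]
```

A production scheduler would likely use an optimal assignment (e.g. the Hungarian algorithm) rather than this greedy pass, since a bad early pairing can strand a distant robot.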


Beyond the desk: VR controller

In a third prototype, which received an Honorable Mention at UIST 2021, we envision X-Rings, a shape display embedded in a Virtual Reality controller. This handheld controller renders a 360° shape display under the user's palm. With X-Rings, we bring the shape display off the table and onto the user's hand, in essence creating a 3D shape-changing device that can simulate different objects the user may grasp in their hands.

We hope that our vision of shape displays for haptic desks and controllers will evolve and become part of the working environment once Spatial Computing becomes a more prevalent form of computing. We believe that the development of light, glasses-form-factor HMDs that can be used as working monitors will bring a paradigm shift in which interacting with digital content also means a new set of mobile haptic environments that make work more tangible through shape displays.

Mar Gonzalez-Franco

Principal Researcher, Microsoft Research

Dr. Mar Gonzalez-Franco is a neuroscientist and computer scientist on the EPIC (Extended Perception Interaction and Cognition) team at Microsoft Research. In her research, she fosters new forms of interaction that will revolutionize how humans use technology. Her interests lie in spatial computing, new devices, avatars, and perception.