A few decades ago, touchscreens were the latest innovation, yet they were clunky devices reserved for applications that did little more than show off that touchscreens were possible.
Nowadays, it is safe to say that touchscreens are an accepted, easy way to interact with an operating system: they are accurate, support multi-touch, and are user-friendly, letting you operate in multiple dimensions.
The perfect touchscreen has yet to be invented, and engineers and researchers are already speculating about the next phase of UI development, especially for robotic control. The leading candidate for this next step is gesture-based control.
The major limitation of touchscreens is that they operate in only two dimensions; a third dimension can be introduced only through force touch, which is extremely limited compared to hand gestures and physical movements. Gesturing can also feel more natural than swiping and tapping a smartphone screen.
Demand for three-dimensional gestural control is growing alongside demand for virtual reality and augmented reality technology. As the digital world we experience moves from two dimensions to three, that evolution needs controls that can manage those experiences.
More sophisticated systems of this kind could also provide better feedback to the user in control, going beyond merely visual responses such as movement or lights: users can be immersed with physical feedback like vibration or resistance.
At present, one of the most advanced gesture systems is Real-Time 3D Gesture Analysis for Natural Interaction with Smart Devices, a project led by researchers at Linnaeus University in Sweden. Simpler forms of gesture-based control are also available: one project, for example, uses an Arduino to build a robot that responds to four different hand gestures, plus a neutral position (simple, but functional).
Alongside these projects, iPhone technology already has users performing simple gestures, such as rotating the phone to experience a 360-degree view of an image or video.
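To make the Arduino-style approach concrete, here is a minimal Python sketch of tilt-based gesture classification in the spirit of the four-gestures-plus-neutral robot described above. The threshold value, gesture names, and function are illustrative assumptions, not details taken from that project.

```python
def classify_gesture(ax, ay, threshold=0.5):
    """Map accelerometer tilt (in g) to one of four gestures or neutral.

    ax, ay: acceleration along the hand's x and y axes; a strong tilt
    in one direction is read as a directional command, anything weaker
    falls back to the neutral position.
    """
    if ay > threshold:
        return "forward"
    if ay < -threshold:
        return "backward"
    if ax > threshold:
        return "right"
    if ax < -threshold:
        return "left"
    return "neutral"


print(classify_gesture(0.0, 0.8))   # strong forward tilt -> "forward"
print(classify_gesture(0.1, -0.2))  # near level -> "neutral"
```

In a real build, the same thresholding would run on the microcontroller against live accelerometer readings, with the result driving the robot's motors.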
A few complications are still preventing gestures from becoming the next landmark in robotics control, such as:
Gesture complexity: this is the biggest difficulty described by the researchers at Linnaeus University. To be effective, their recognition system needs to gather thousands of tiny data points and interpret complex patterns in order to identify the intended movement. This heavy data processing needs to happen in real time, especially for applications like live robotics control or VR-based video games, which demands a processor that can operate quickly.
Accessibility: a gesture recognition system would likely be developed around a standard human model, such as a right human hand. How would the system accommodate somebody who is missing a hand, or somebody with Parkinson's disease?
Applications: for gestures to be recognized as a mainstream way to interact with robotics and virtual reality, applications that use them need to be available. VR technology has been advancing strongly for the past several years and is poised to take a big step forward in user adoption by 2020, using simple hand movements and other physical gestures to control, for example, what appears on a two-dimensional screen.
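The real-time constraint described under gesture complexity can be made concrete with a small timing sketch. This is a minimal illustration, not the Linnaeus University system: the per-frame computation (a centroid over thousands of tracked points) and the 60-frames-per-second budget are assumptions chosen for the example.

```python
import time

FRAME_BUDGET_S = 1 / 60  # ~16.7 ms per frame, a common real-time target


def process_frame(points):
    """Placeholder per-frame work: centroid of the tracked data points."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return cx, cy


# Thousands of synthetic (x, y) data points standing in for sensor input.
points = [(i * 0.001, i * 0.002) for i in range(5000)]

start = time.perf_counter()
centroid = process_frame(points)
elapsed = time.perf_counter() - start

status = "within" if elapsed < FRAME_BUDGET_S else "over"
print(f"processed {len(points)} points in {elapsed * 1000:.2f} ms ({status} budget)")
```

A real recognizer does far more per frame than a centroid, which is exactly why the researchers emphasize fast processors: every stage of feature extraction and pattern matching must fit inside that same per-frame budget.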
Gestures seem to be the best form of control for the digital devices, robotics, and VR systems of the future, but there are still many difficulties to overcome before we reach large-scale adoption.
Be ready! Researchers are ahead of the curve, already preparing intelligent gesture-based recognition programs for the day touchscreens become obsolete.