How ML Is Changing The Way We Use Touchscreens

Ram Sagar

The touchscreens we use at supermarkets and ATMs were accidentally invented by a group of atomic physicists back in 1970, though the conception of touchscreens can be traced back to the 1940s, even before science fiction writers warmed up to the innovation. Today, the use of touchscreens is bounded only by the creativity of users. You can pinch, zoom, type and quite literally move the world with your fingers. However, a typical user will also have experienced typos, unwanted clicks and many other mishits that could not be undone. So, what if touchscreens were made smart enough to understand touch better?

Researchers at Google have used state-of-the-art machine learning models to design a more intelligent touchscreen for their Pixel phones.

How Touch Works

via Google AI

A capacitive touch sensor is a collection of electrodes arranged in rows and columns and separated by a dielectric. As illustrated above, when a finger interacts with a touch sensor cell, it disturbs the charge in the field projected around the two electrodes.

Capacitive touch sensors don’t respond to changes in force but are tuned to be highly sensitive to changes in distance within a couple of millimetres above the display. That is, a finger contact on the display glass should saturate the sensor near its centre, but will retain a high dynamic range around the perimeter of the finger’s contact (where the finger curls up).

When the soft tissue of the finger touches the screen, it deforms and spreads out. The nature of this spread depends on the size and shape of the user’s finger, and its angle to the screen. The spread is also a dynamic change that unfolds over time, which distinguishes a firm press from contacts that merely have a long duration or a large area.
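To make this concrete, the sketch below treats a capacitive frame as a small 2D grid of readings and tracks how the contact patch grows across a few frames. The grid size, value range and threshold are illustrative assumptions, not the actual Pixel sensor specifications.

```python
import numpy as np

def contact_features(frame: np.ndarray, threshold: float = 0.3):
    """Return the contact area and centroid of a single capacitance frame."""
    mask = frame > threshold                 # cells the finger is activating
    area = int(mask.sum())                   # number of activated cells
    if area == 0:
        return 0, None
    rows, cols = np.nonzero(mask)
    centroid = (rows.mean(), cols.mean())    # centre of the contact patch
    return area, centroid

# A toy sequence of frames: the contact patch grows as the soft tissue of the
# finger deforms and spreads out during a firm press.
frames = [np.zeros((16, 16)) for _ in range(3)]
frames[0][7:9, 7:9] = 1.0      # light initial contact
frames[1][6:10, 6:10] = 0.8    # finger flattens
frames[2][5:11, 5:11] = 0.6    # full spread of the fingertip

for t, frame in enumerate(frames):
    area, centroid = contact_features(frame)
    print(f"frame {t}: area={area} cells, centroid={centroid}")
```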

Beyond tapping and swiping gestures, long-pressing has been the main alternative path for interaction. For a phone to register a long press, the user’s finger must remain stationary for 400–500 ms. The delay is not long, but neither is it immediate, which can hurt the user experience. An alternative to time-threshold-based interaction is force estimation, but that brings its own challenges, such as differentiating between a soft and a firm touch, which in turn requires hardware sensors.
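For reference, here is a minimal sketch of the conventional time-threshold approach, assuming a touch is delivered as a stream of (timestamp, x, y) samples. The threshold and movement tolerance are illustrative values, not the actual Android implementation.

```python
LONG_PRESS_MS = 450          # somewhere in the 400-500 ms window
MOVE_TOLERANCE_PX = 10       # how far the finger may drift and still count

def is_long_press(events):
    """events: list of (timestamp_ms, x, y) samples for one continuous touch."""
    if not events:
        return False
    t0, x0, y0 = events[0]
    for t, x, y in events:
        if abs(x - x0) > MOVE_TOLERANCE_PX or abs(y - y0) > MOVE_TOLERANCE_PX:
            return False                      # finger moved: a drag, not a press
        if t - t0 >= LONG_PRESS_MS:
            return True                       # held still long enough
    return False

# Example: a finger that barely moves for roughly half a second.
touch = [(0, 100, 200), (150, 101, 200), (300, 102, 201), (480, 102, 201)]
print(is_long_press(touch))   # True
```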

What’s ML Got To Do With Touch

via Google AI

Sensing touch force requires dedicated hardware sensors that are expensive to design and integrate. According to Google, touch force is difficult for people to control, so most practical force-based interactions focus on discrete levels of force (e.g., soft vs. firm touch) that do not require the full capabilities of a hardware force sensor.

The differences between users (and fingers) make it difficult to encode these observations with heuristic rules. Google, therefore, designed a machine learning solution that learns these features and their variances directly from user interaction samples.

via Google AI

The researchers at Google designed a neural network that combines convolutional (CNN) and recurrent (RNN) components. The CNN handles the spatial features of each frame, while the RNN tracks their temporal development and provides a consistent runtime experience.
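As a rough illustration of such a combination, the Keras sketch below applies a small per-frame CNN and feeds the resulting features into a GRU over the frame sequence. The sensor resolution, layer sizes and output classes are assumptions made for the example; Google has not published the exact architecture.

```python
import tensorflow as tf

FRAME_H, FRAME_W = 16, 16     # assumed capacitive sensor grid size
NUM_CLASSES = 2               # e.g. "ordinary tap" vs "firm press" (assumed)

# CNN applied to each incoming frame to extract spatial features.
frame_in = tf.keras.Input(shape=(FRAME_H, FRAME_W, 1))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(frame_in)
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
frame_encoder = tf.keras.Model(frame_in, x)

# RNN tracks how those features evolve across the sequence of frames.
seq_in = tf.keras.Input(shape=(None, FRAME_H, FRAME_W, 1))
features = tf.keras.layers.TimeDistributed(frame_encoder)(seq_in)
summary = tf.keras.layers.GRU(32)(features)
probs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(summary)

model = tf.keras.Model(seq_in, probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```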


Each frame is processed by the network as it is received from the touch sensor, and the RNN state vectors are preserved between frames (rather than processing frames in batches). The network was intentionally kept simple to minimise on-device inference costs when running concurrently with other applications, taking approximately 50 µs of processing per frame and less than 1 MB of memory using TensorFlow Lite.
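The sketch below shows that streaming pattern in plain TensorFlow: each frame is pushed through a small convolutional encoder as it arrives, and the GRU state is carried over between calls instead of batching a whole sequence. The layer sizes mirror the illustrative model above; the actual on-device network and its TensorFlow Lite conversion will differ.

```python
import numpy as np
import tensorflow as tf

FRAME_H, FRAME_W = 16, 16     # assumed sensor grid size, as before

class StreamingTouchClassifier:
    """Processes one capacitive frame at a time, carrying the RNN state across calls."""

    def __init__(self):
        frame_in = tf.keras.Input(shape=(FRAME_H, FRAME_W, 1))
        feat = tf.keras.layers.Conv2D(8, 3, activation="relu")(frame_in)
        feat = tf.keras.layers.GlobalAveragePooling2D()(feat)
        self.encoder = tf.keras.Model(frame_in, feat)   # spatial features per frame
        self.gru_cell = tf.keras.layers.GRUCell(32)     # one temporal step per frame
        self.head = tf.keras.layers.Dense(2, activation="softmax")
        self.state = [tf.zeros((1, 32))]                # preserved between frames

    def process_frame(self, frame: np.ndarray):
        features = self.encoder(frame.reshape(1, FRAME_H, FRAME_W, 1).astype("float32"))
        output, self.state = self.gru_cell(features, self.state)
        return self.head(output).numpy()

# Feed frames one by one, as they would arrive from the touch sensor.
clf = StreamingTouchClassifier()
for _ in range(5):
    probs = clf.process_frame(np.random.rand(FRAME_H, FRAME_W))
print("class probabilities after latest frame:", probs)
```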

Through this integration of machine-learning algorithms and careful interaction design, Google researchers were able to deliver a more expressive touch experience for Pixel users and plan to explore new forms of touch interaction.

The applications of touchscreens now extend far beyond typical smartphone usage. Touchscreens are now being used in hospitals, autonomous cars and even spacecraft, a few of the critical scenarios where the risk of a mishit is too high. Having an intelligent surface that knows what we want just from sensing the force of our finger can come in quite handy, especially as even AutoML tools transition towards touch-based, drag-and-drop analytics.

Know more here.
