At a time when automotive manufacturers are exploring autonomous technology, Nissan is taking the brain-computer interface forward with its brain-to-vehicle (B2V) technology. This cutting-edge automation enables drivers to control their cars using their thoughts. Showcased at the CES trade show in Las Vegas in January this year, the technology decodes the driver's brainwave activity through a device worn on the driver's head.
Nissan’s executive vice president Daniele Schillaci said, “When most people think about autonomous driving, they have a very impersonal vision of the future, where humans give up control to the machines. But B2V technology does the opposite by using signals from their own brain to make the drive even more exciting and enjoyable.”
Inside The Brain-To-Vehicle Technology
Nissan has been at the forefront of intelligent mobility, giving people greater control over their vehicles. B2V technology works by collecting data while the car is driven in manual mode; the car's autonomous system then adapts to match the user's unique driving style. The Japanese automaker also wants to deliver more autonomy and more connectivity. The system allows the vehicle to predict when and how the driver will initiate an action, such as turning the wheel or pressing the brake, and to begin that action before the driver does.
This is reportedly a first-of-its-kind technology: the driver wears a device that measures brainwave activity, which autonomous systems then analyse. By initiating an action such as braking 0.2 to 0.5 seconds faster than the driver, the system keeps its intervention largely imperceptible.
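The idea of anticipating an action from a brainwave trace can be sketched in a few lines. The code below is a hypothetical illustration, not Nissan's actual B2V pipeline: the feature (a sustained negative drift, loosely modelled on movement-preparation activity) and the threshold are assumptions chosen for the example.

```python
# Hypothetical sketch: detecting a precursor to a braking action in a
# window of brainwave samples. Feature and threshold are illustrative.

def drift(window):
    """Average sample-to-sample change over the window."""
    return sum(b - a for a, b in zip(window, window[1:])) / (len(window) - 1)

def anticipate_brake(window, threshold=-0.5):
    """Return True if the signal shows a sustained negative drift,
    taken here as a sign the driver is preparing to brake."""
    return drift(window) < threshold

# A steadily falling trace mimics movement-preparation activity;
# a flat, noisy trace does not trigger early assistance.
preparing = [10.0 - 0.8 * i for i in range(20)]   # sustained drop
idle = [10.0, 10.2, 9.9, 10.1] * 5                # no overall trend

print(anticipate_brake(preparing))  # True
print(anticipate_brake(idle))       # False
```

In a real system the classifier would be trained per driver, which is consistent with the article's point that B2V adapts to an individual's driving style.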
In a video released by Nissan, the researchers demonstrated how a person wearing this headset and driving on a simulator could have their movements anticipated by monitoring their brain signals.
B2V is the product of four years of research conducted in collaboration with scientists from CNBI, who worked on the brain-machine interface and delivered it to their industry partner Nissan. The result was integrated into a prototype, manifested as an interface that enables the vehicle to communicate with the driver.
Mind Over Motor
The researchers added an eye tracker and studied brain signals that indicate which objects the vehicle needs to factor in. This work is part of a broader research effort to build scientific knowledge around using brain signals to control objects and the environment. Brain-computer interfaces have been an active area of research; researchers at Boston University and MIT, for example, have used such signals to teach robots to sharpen sorting decisions. In a similar vein, in autonomous driving, brain signals are used to fine-tune driving behaviour.
To this end, researchers developed algorithms that mimic the neurons in a brain and can learn from actions. Over time, these algorithms are trained to identify objects in the car's surroundings and to classify fast-moving objects as pedestrians, cars, obstacles or traffic lights. Once a decision has been taken, the system sends electronic commands to controllers to turn the steering wheel or apply the brakes.
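The classify-then-command loop described above can be sketched as follows. This is a toy illustration under stated assumptions: the object classes match those named in the article, but the distance thresholds, dictionary fields and command names are invented for the example, and the classifier is a simple stand-in for a learned model.

```python
# Hypothetical sketch of a perception-to-control loop: classify a
# detected object, then map the decision to an actuator command.
# Thresholds and command names are illustrative assumptions.

KNOWN_CLASSES = {"pedestrian", "car", "obstacle", "traffic_light"}

def classify(detection):
    """Toy stand-in for a learned classifier over surrounding objects."""
    kind = detection["kind"]
    return kind if kind in KNOWN_CLASSES else "unknown"

def decide(detection, gap_m):
    """Map a classified object and its distance (metres) to a command."""
    kind = classify(detection)
    if kind == "pedestrian" and gap_m < 20:
        return "brake"
    if kind == "traffic_light" and detection.get("state") == "red":
        return "brake"
    if kind == "obstacle" and gap_m < 10:
        return "steer_around"
    return "maintain"

print(decide({"kind": "pedestrian"}, gap_m=12))  # brake
print(decide({"kind": "car"}, gap_m=40))         # maintain
```

In a production system the `decide` step would be a planner fed by the perception stack, and its output would go to the steering and brake controllers the article mentions.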
Despite continuous advancement, researchers assert that the algorithms dubbed the car's "brain" cannot yet be completely trusted. Policymakers and even engineers do not fully believe the programs can assess every situation a car encounters in traffic. Researchers also need more data in order to understand and train ML algorithms efficiently.
Therefore, self-driving cars are still closely supervised despite a slew of announcements from major automakers, and a safety driver is always present to take over if the car goes out of control. Interestingly, Nissan's B2V technology, the world's first to predict a driver's actions and act 0.2 to 0.5 seconds faster, will take five to ten years to be put into production.