Last week, AI enthusiasts were served two of the biggest goof-ups on live TV, leaving many in splits. These two events, though funny, forced people to reassess their opinions about the future of artificial intelligence. In the sections below, we take a look at what happened behind the scenes.
AI Hits The Wall, Actually
Roborace is the world first driver-less/autonomous motorsports category.
— Ryan (@dogryan100) October 29, 2020
This is one of their first live-broadcasted events.
This was the second run.
It drove straight into a wall. pic.twitter.com/ss5R2YVRi3
Roborace, dubbed Motorsport 2.0, partnered with the likes of Acronis, Nvidia, Arrival and Michelin to launch a first-of-its-kind racing event. This historic event was supposed to encourage innovators to come up with workable self-driving solutions and show the world the state of the art.
However, things didn’t go as planned. The live stream of the event captured Acronis’ vehicle driving straight into a wall, surprising its engineers and leaving the viewers in splits. The botch-up was addressed by one of the engineers in a Reddit post.
“The actual failure happened way before the moment of the crash, on the initialisation lap,” wrote one of the engineers.
The engineer explained that during the initialisation lap, something happened that apparently caused the steering control signal to go to NaN (not a number), after which the steering locked to the maximum value to the right.
“…the acceleration command went as normal but the steering was locked to the right.” The engineer added that on examining the log values, they noticed that though the controller tried to steer the car back to the left, it didn’t work because of the steering lock. “The desired trajectory was also good, the car definitely did not plan to go into the wall,” explained the engineer.
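The log behaviour the engineer describes is consistent with how NaN propagates in floating-point arithmetic: once a steering command becomes NaN, every corrective term added to it still yields NaN, so the controller’s left-steering corrections change nothing. A minimal, hypothetical sketch (our own illustration, not the team’s actual code):

```python
import math

# Stand-in for the corrupted steering signal from the initialisation lap
steering_cmd = float("nan")

# The controller keeps pushing left, but NaN absorbs every update:
for correction in (-0.3, -0.6, -1.0):
    steering_cmd = steering_cmd + correction
    assert math.isnan(steering_cmd)  # corrections have no effect
```

This is why the logs could show a sane desired trajectory alongside a steering output that never moved.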
The forum users didn’t find the response entirely convincing and started probing into what could really have gone wrong. “NaN occurrence is a symptom and not a cause,” said Sanjeev Sharma, founder of the self-driving company Swaayatt Robots. “NaN is a symptom of some part of the code failing to do its job properly and not the problem in itself,” he added.
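Sharma’s point is that a NaN never appears out of nowhere: under IEEE-754 rules it is produced by a handful of ill-defined operations, each of which means some upstream computation had already gone wrong. A few canonical examples:

```python
import math

inf = math.inf
# Each of these ill-defined operations yields NaN — a symptom that
# some earlier value had already diverged or overflowed:
nan_sources = [
    inf - inf,   # e.g. subtracting two runaway estimates
    inf * 0.0,   # e.g. a gain applied to an overflowed signal
    inf / inf,   # e.g. a ratio of two diverging quantities
]
assert all(math.isnan(x) for x in nan_sources)
# Python raises an exception on 0.0 / 0.0, but C/C++ control code
# (and NumPy) quietly returns NaN there as well.
```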
Responding to queries about whether there was a safeguard against the consequences of a NaN, the engineer wrote that they do have a failure-mode system in place, and the intended behaviour is to put the car into emergency braking once one of the systems becomes nonfunctional or stops producing any output.
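A failure mode of that kind typically amounts to validating every actuator command before it is forwarded. A hypothetical guard along these lines (names and structure are our own, not the team’s):

```python
import math

def safe_steering(cmd, last_valid=0.0):
    """Reject non-finite steering commands: hold the last valid
    value and signal that emergency braking should engage."""
    if not math.isfinite(cmd):
        return last_valid, True   # (held command, trigger e-brake)
    return cmd, False

cmd, brake = safe_steering(float("nan"))
assert brake and cmd == 0.0       # NaN caught, e-brake requested
cmd, brake = safe_steering(0.4)
assert not brake and cmd == 0.4   # normal command passes through
```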
Furthermore, the Reddit users were surprised to learn that the Acronis team was using MATLAB for this race, though it is not clear to what extent. There is nothing wrong with using MATLAB, but practitioners rarely use it for deployment; its usage is mostly restricted to research, where you want to prove an idea and don’t care about run-time.
People familiar with building self-driving systems opined that any high-level process could have failed and propagated down to the low-level controllers as a NaN. Or it could have been a value exception, or some badly written exception-handling code.
Though team Acronis admitted that they did not know the exact cause, they reasoned that it could have been a short spike in the inputs to the controller, which is a very rare event. Normally, such spikes are filtered out, but due to some configuration issue, this one sneaked into the system. “We had testing days before and had never experienced this,” wrote the engineer.
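Spike filtering of the kind the engineer mentions is commonly done with a small sliding-window filter, for example a median over the last few samples, which rejects any single-sample outlier before it reaches the controller. A toy sketch (our own illustration, not the team’s filter):

```python
def median3(window):
    """Median of a three-sample window."""
    return sorted(window)[1]

samples = [0.10, 0.12, 9.99, 0.11, 0.13]  # 9.99 is a one-sample spike
window = [0.0, 0.0, 0.0]
filtered = []
for s in samples:
    window = window[1:] + [s]      # slide the window forward
    filtered.append(median3(window))

assert 9.99 not in filtered        # the spike never passes through
```

If such a filter is misconfigured or bypassed for even one signal path, a single bad sample can reach the controller, which matches the team’s hypothesis.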
When it comes to self-driving cars, we can never say 99% is enough; we just can’t afford any outlier event. This is one of the reasons fully autonomous cars aren’t yet a worldwide phenomenon. However, the story ended well for the team, as they took the second spot in the final standings.
AI Camera Mistakes Referee For Ball
Scottish team Inverness Caledonian Thistle Football Club had deployed an AI camera to live-stream their match against Ayr United.
The camera is programmed to follow the ball throughout the match; instead, it started to pan towards the linesman, who was standing away from the action. The linesman was sporting a bald look which, according to reports, the algorithm might have mistaken for the football. The commentators had to apologise for the goof-up, as the viewers were shown the linesman waving his flag for much of the time.
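The broadcaster has not published details of the tracker, but a plausible explanation is that appearance-based ball trackers score candidate blobs on simple cues such as brightness and roundness, and on those cues a shiny bald head can score almost as high as the ball. A toy illustration with made-up numbers:

```python
# Hypothetical candidate blobs scored on two appearance cues.
# Numbers are invented purely to illustrate the ambiguity.
blobs = [
    {"label": "ball",      "brightness": 0.95, "roundness": 0.98},
    {"label": "bald head", "brightness": 0.93, "roundness": 0.96},
]

def score(blob):
    return blob["brightness"] * blob["roundness"]

best = max(blobs, key=score)
# The margin between the two candidates is tiny, so a little noise
# or occlusion is enough to flip the tracker onto the wrong target:
assert abs(score(blobs[0]) - score(blobs[1])) < 0.05
```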
AI-powered 4K camera installations are helping smaller football clubs cut costs. All one has to do is mount the camera on the tripod it ships with, and it is good to go. These cameras can pan, zoom and cover hours of footage in 4K. But when such a camera mistakes a bald head for a football, one had better have a rookie cameraman operating the gear instead. AI sceptics might even extend this error to computer vision in general: if objects as abundant in the data as balls and human heads can confuse algorithms, how will they behave when they encounter an anomaly, say a cancer cell, they might argue.
As much as we would like to see AI succeed at mundane human activities, events like those above hint that AI is still far from dispensing with human supervision. Successes like GPT-3 and AlphaGo do give us the impression that AGI is within arm’s reach. However, deploying AI in the real world quite often opens up a new can of worms. Algorithms behind systems that predict your next festival purchase can afford to be flawed, but a steering lock on a busy street cannot be excused.