The technology for autonomous vehicles has been around for a while, and major automakers and tech companies worldwide have invested billions of dollars in making it a reality. But according to industry analysts, it will be years before the automotive industry reaches the point where vehicles can handle most driving conditions entirely on their own, without human intervention.
Real-life situations, including making split-second decisions, dealing with rapidly changing weather, and spotting another motorist at a crosswalk, are best left to an attentive driver. Technology can be very useful; in some cases, when used appropriately, modern automobile assist systems can even save lives. But driving is challenging: with so many types of roads, lanes, and weather conditions, the same course of action is not always the best one.
The National Highway Traffic Safety Administration (NHTSA), which collected crash reports from AV manufacturers between July 2021 and May 2022, stated in the first thorough report of its kind that this essential data will be required for research and policy formulation to improve the safety of these technologies. Furthermore, critics of these programmes argue that businesses are advancing autonomous driving technology at the expense of public safety. For example, Tesla was pressured into discontinuing an “assertive” self-driving mode that permitted its vehicles to pass through stop signs without coming to a complete halt.
In the end, an AV’s safety and its ability to function smoothly boil down to the perception kit installed in the car. Perception systems use different sensor modalities to build a model of the environment around the vehicle.
To improve the detection of far-away objects, perception systems have also started to adopt novel edge hardware. These vehicles must also comply with new ISO standards for the safe operation of AVs. Perception systems are incorporating new technologies such as 4D radar-on-chip digital imaging, which is touted as an enabler of automated mobility. Research has also found that introducing heterogeneous computing platforms will contribute to the early realisation of autonomous driving. Within the context of autonomous vehicles, the design transformation to data-driven vehicles is related to the concept of digital twins, which will build on a standard framework to address safety challenges in AVs.
Fig: 3D Lidar
The sensors mounted on an AV determine its capacity to identify and safely navigate around other vehicles, cyclists, pedestrians, and any other potentially dangerous obstructions on the road. Lidar sensors hold great promise for helping autonomous vehicles navigate and view the world with high accuracy. A lidar sensor produces a three-dimensional map of a target region by scanning an optical beam across it and gathering spatial data from the reflections. By offering high-resolution, unambiguous range and velocity information day and night, lidar complements cameras and radar.
Fig: Point Cloud
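A point cloud like the one above is built by converting each lidar return, a range plus the beam’s azimuth and elevation angles, into Cartesian coordinates. The sketch below shows that conversion for a hypothetical sensor; the axis convention (x forward, y left, z up) and the sample scan values are illustrative assumptions, not a specific vendor’s format.

```python
import math

def lidar_to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert a single lidar return (range in metres, angles in degrees)
    to Cartesian coordinates in the sensor frame (x forward, y left, z up)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A scan is simply this conversion applied to every (range, azimuth,
# elevation) sample gathered while the optical beam sweeps the scene.
scan = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 0.0, 30.0)]
points = [lidar_to_cartesian(*s) for s in scan]
```

Real perception stacks perform the same geometry, typically vectorised over hundreds of thousands of returns per second, before downstream detection and mapping.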
But before this modern technology can be widely adopted, it must first overcome a number of limitations.
The challenges facing the lidar industry:
Listed below are the challenges slowing the adoption of 3D lidar in the automotive industry:
- Cost must drop by two orders of magnitude from current prices.
- Automotive-grade reliability, covering ingress, impact, heat, shock, and vibration, to guarantee peak performance across a variety of climates and road conditions. According to the standards set by the Automotive Electronics Council, a global alliance of automotive electronics companies, any workable lidar solution must also demonstrate multi-year reliability.
- For long-range lidars, the range should exceed 150 metres for objects of 8% reflectivity and above.
- The vertical field of view (FoV) should exceed 45 degrees for situational awareness around the vehicle.
While many commercial lidar systems can meet some of these criteria, few fulfil all of them simultaneously.
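The 150-metre-at-8%-reflectivity requirement is demanding because returned optical power falls off with both target reflectivity and distance. As a rough intuition, under a simplified link budget for a diffuse (Lambertian) target with 1/R² falloff, detection range scales with the square root of reflectivity. The sketch below uses only that simplification; real sensors also lose range to atmospheric attenuation, detector noise, and optics losses.

```python
import math

def max_range(target_reflectivity, spec_reflectivity, spec_range_m):
    """Scale detection range with target reflectivity, assuming a
    Lambertian target and 1/R^2 falloff of returned power.
    This is a simplified lidar link budget, for intuition only."""
    return spec_range_m * math.sqrt(target_reflectivity / spec_reflectivity)

# A sensor specified for 150 m at 8% reflectivity sees a highly
# reflective target (~80%, e.g. a road sign) much farther out:
r = max_range(0.80, 0.08, 150.0)   # roughly 470 m under this model
```

The same scaling explains why dark, low-reflectivity obstacles (tyres, dark clothing) dominate the range specification rather than the best-case target.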
Give way to Solid-State Lidar
Solid-state lidars debuted in 2018 and swiftly rose to prominence. Solid-state sensors can extend sensor range to more than 200 metres while cutting costs by more than ten times. To appreciate the advantages of solid-state lidar, it helps to first look at the existing technology. Until now, autonomous vehicles and systems have mainly depended on lidar that steers an optical beam using moving parts. The most widely used lidar designs mount several lasers, optics, electronics, and detectors on a stage that rotates mechanically. Assembling and aligning all these components results in high costs and only modest manufacturing volumes, and the wear and tear of the mechanical parts raises concerns about long-term dependability.
According to a Tracxn report, 65 AV companies use lidar. These lidar systems, produced in the tens of thousands of units, have advanced the field of autonomous vehicles, but they are not suited to widespread deployment. As a result, there is a strong movement towards eliminating mechanical components in favour of compact designs that are more dependable, can be produced in greater quantities, and cost less per unit.
The demand for solid-state lidar is expected to grow at a CAGR of 30.66% over the forecast period of 2021-26. [Reference: https://www.researchandmarkets.com/reports/5238674/solid-state-lidar-market-forecasts-from-2021-to.]
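To put that growth rate in perspective, a constant CAGR compounds multiplicatively over the forecast window. The quick calculation below shows what 30.66% per year implies over the five-year period; it is simple arithmetic on the cited figure, not additional market data.

```python
# Overall market growth implied by a 30.66% CAGR over 2021-26:
cagr = 0.3066
years = 5
multiplier = (1 + cagr) ** years
# i.e. the market roughly quadruples (about 3.8x) over the forecast period
```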
Lidar makers like Velodyne (now Velodyne + Ouster) and tech companies like Luminar and XenomatiX are advancing the state of the art in lidar research. With OEMs like Mercedes-Benz entering deeper partnerships in the lidar space, the future of L3 and L4 autonomy looks increasingly real.
New research points to add-on features for solid-state lidar, such as:
Velocity – the Fourth Component of 3D Lidar Data
Aeva’s lidar provides per-point velocity data, enabling real-time six-degree-of-freedom ego-vehicle motion estimation, motion correction, and online extrinsic sensor calibration to aid sensor fusion. These velocity estimates enable precise vehicle localisation and navigation in GPS-denied and featureless environments such as tunnels, without needing extra sensors such as an IMU or GPS.
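Per-point velocity in coherent (FMCW-style) lidar comes from the Doppler shift of the returned light: the radial velocity is the measured frequency shift times half the wavelength. The sketch below illustrates that relationship; the 1550 nm wavelength, the sign convention, and the sample Doppler value are illustrative assumptions, not Aeva specifications.

```python
# Radial velocity from the Doppler shift measured by a coherent lidar:
#   v = f_doppler * wavelength / 2
WAVELENGTH_M = 1550e-9   # assumed operating wavelength (1550 nm)

def radial_velocity(doppler_shift_hz):
    """Radial velocity in m/s along the beam; positive means the
    target is receding (sign convention is an assumption)."""
    return doppler_shift_hz * WAVELENGTH_M / 2.0

# An illustrative ~12.9 MHz shift corresponds to roughly 10 m/s:
v = radial_velocity(12.9e6)
```

Because every return carries its own velocity, static background points can be separated from moving objects in a single frame, which is what makes the ego-motion estimation described above possible without an IMU or GPS.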
Fully autonomous vehicles, even if commercially accessible, are still years away, and lowering the cost of lidar can spur development. In addition, lidar sensors can help resolve several difficulties, from safety to smarter navigation.
“As autonomous vehicles continue to develop from functional to dependable and safe, Ignitarium’s Automotive & Mobility Group is working on making reliable implementations a reality. With our core background in the Automotive chip design domain, we are able to bring architecture and design expertise, enabling customers to develop leading-edge 3D and 4D sensor HW. In the automotive SW areas, we work closely with Tier1/Tier2 suppliers and automotive semiconductor companies to develop technologies that blend Sensors, AI accelerator hardware, Functional Safety, optimised software pipelines, and create perception solutions,” said Pradeep Sukumaran, VP – Analytics & Digital Services at Ignitarium. He adds, “Our LiDAR software expertise goes beyond automotive, to include survey-graded mapping that includes SLAM and post-processing, re-localisation in dynamic and GPS-denied environments, and deep learning model optimisation for real-time performance on the edge.”