The developer community is familiar with Google’s Cloud TPU, the accelerator powering machine learning breakthroughs in data centres across the globe. Custom-designed for ML training and inference workloads, these chips outperform general-purpose GPUs and CPUs on the tasks they target.
Now, as demand for machine learning at the edge rises, edge computing has seen a considerable growth spurt, with leading manufacturers such as Google (Coral), Intel (Neural Compute Stick) and Nvidia (Jetson Nano) expanding their offerings with a new wave of low-cost, low-powered devices that can speed up AI use cases at the edge.
This new market has created new demands, and hardware companies are moving to seize the opportunity. New research suggests there are more than 100 edge computing use cases across 11 sectors that could generate $175 billion to $215 billion in hardware value by 2025. So far, demand for edge computing has been fuelled by the rise of autonomous vehicles, drones and IoT devices. As the industry matures, however, more diverse use cases will create a paradigm shift in how devices are optimised for AI training and inference.
Google Takes A Different Approach To Hardware
Google is clearly looking to play a leading role in edge computing. In March this year, it launched the Coral Dev Board, a single-board computer outfitted with the Edge TPU, a small ASIC that provides high-performance ML inferencing for low-power devices. The Coral Dev Board can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps in a power-efficient manner. Given the focus on computer vision use cases, the board makes prototyping easier with a camera module that connects to it over a MIPI interface.
Since the launch in March, there have been several updates to the kit. The latest came this week, when Google announced EfficientNet-EdgeTPU, a family of image classification models derived from EfficientNets and optimised to run on the Edge TPU at the heart of the Coral boards’ system-on-module.
The big news is that Google has departed from the current industry trend of building domain-specific architectures, i.e. hardware accelerators built to speed up specific neural networks. While industry giants follow the traditional approach of customising hardware to accelerate ML models, Google, seeing demand for computer vision tasks soar, has instead optimised the models for the hardware. The move should help EfficientNet-EdgeTPU edge out the competition by providing an end-to-end environment of hardware and software tools to train and run neural networks and take ideas to production.
By leveraging the power of AutoML to customise EfficientNets for the Edge TPU, developers can achieve state-of-the-art accuracy in image classification while reducing model size and computational complexity. In short, as one ML researcher puts it — AutoML + Edge TPU + model optimisation leads to better latency and accuracy.
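To make the latency/accuracy trade-off concrete, platform-aware searches of this kind typically score each candidate network with a multi-objective reward that blends measured accuracy and on-device latency, in the style popularised by MnasNet. The sketch below is illustrative, not Google's exact search code; the exponent value and the figures in the assertions are assumptions chosen for demonstration.

```python
def nas_reward(accuracy, latency_ms, target_ms, w=-0.07):
    """Multi-objective reward balancing accuracy against measured latency.

    MnasNet-style formulation: reward = accuracy * (latency / target) ** w.
    The negative exponent w (here -0.07, an illustrative default) penalises
    models slower than the latency target, steering the architecture search
    toward networks that are both accurate and fast on the real hardware.
    """
    return accuracy * (latency_ms / target_ms) ** w

# A model exactly at the latency target keeps its raw accuracy as its score,
assert nas_reward(0.80, 10.0, 10.0) == 0.80
# while an equally accurate model that is twice as slow scores lower.
assert nas_reward(0.80, 20.0, 10.0) < 0.80
```

Because the reward is computed from latency measured on the target accelerator itself, the search naturally favours operations the Edge TPU executes efficiently, which is precisely the "customise the model to the hardware" approach described above.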
One of the biggest takeaways is clearly that Google wants to lead in the edge computing market. By leveraging its heft with AutoML, the researchers have taken a new approach of optimising neural networks for the hardware they run on. As Google researchers Suyog Gupta and Mingxing Tan sum it up, “While there has been a steady proliferation of architectures in data centres and on edge computing platforms, the NNs that run on them are rarely customised to take advantage of the underlying hardware. From past experience, we know that Edge TPU’s power efficiency and performance tend to be maximised when the model fits within its on-chip memory”. Such model customisations help developers achieve accuracies on par with larger, compute-heavy models in data centres. EfficientNet-EdgeTPU, customised for mobile accelerators, provides better accuracy and 10x faster inference speed.
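The researchers' point about fitting within on-chip memory can be sketched with simple arithmetic. Edge TPU models are 8-bit quantised, so each weight occupies roughly one byte; the ~8 MB SRAM figure below is an assumption based on the Coral compiler's reported parameter cache, and real behaviour also depends on activations and compiler layout.

```python
def fits_on_chip(num_params, bytes_per_param=1, sram_bytes=8 * 1024 * 1024):
    """Rough check of whether a quantised model's weights fit in on-chip SRAM.

    Edge TPU models use int8 weights (one byte per parameter). The ~8 MB
    SRAM budget is an assumption for illustration; when weights exceed it,
    the compiler must stream parameters from slower external memory,
    hurting both latency and power efficiency.
    """
    return num_params * bytes_per_param <= sram_bytes

# EfficientNet-B0's ~5.3M parameters fit comfortably as int8 weights,
assert fits_on_chip(5_300_000)
# while a ~25.6M-parameter model like ResNet-50 would spill off-chip.
assert not fits_on_chip(25_600_000)
```

This back-of-the-envelope check is one reason compact, quantisation-friendly families like EfficientNet-EdgeTPU can match bigger data-centre models on accuracy while running far faster on the accelerator.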
Google has always been about empowering developers, and it changed the landscape with tools like TensorFlow and AutoML. Low-cost devices running optimised neural networks open edge AI up to a much larger developer audience. In a first, accelerator-optimised neural networks built with AutoML reduce the complexity further, taking the technology to a broader community and even to large-scale companies, where it can be deployed across sectors such as transport, logistics, retail, energy and the public sector. The hardware can be fitted to drones for monitoring in mining, agriculture and defence, while in agriculture, edge devices can also be used for location tracking of livestock.
Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage and has had stints at Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.