When edge computing is merged with machine learning, we get edge intelligence. As the name suggests, it is a domain that deals with leveraging intelligence/insights acquired through data at a local level.
According to Cisco’s forecast, mobile users and IoT devices will generate 850 ZB of data by 2021. With this growing volume, challenges such as latency will emerge in centralized cloud deployments. Real-time applications need processing to happen closer to the data, and that is where edge computing comes into the picture.
The objective of edge intelligence is to improve the quality and speed of data processing while safeguarding data privacy. In turn, the advantages of edge computing compensate for the limitations of purely cloud-based AI applications, and that is why the field emerged.
Edge computing is mainly tapped for the following advantages:
- Low latency
- Low energy consumption
- Scalability
In the next section, we briefly discuss the critical components of edge intelligence, and the kind of services available today.
Components Of Edge Intelligence
Edge computing is typically a virtual computing platform that provides services such as computing, storage, and networking between end devices and the server, which is on the cloud.
Taking advantage of intelligent algorithms in the IoT context also means equipping IoT end devices (such as sensors, actuators and microcontrollers) with functionalities capable of unleashing the power of ML algorithms on the IoT device itself. This extends the use of ML in IoT beyond the cloud, to the devices themselves.
Before we go further, we need to talk about end devices and edge devices briefly.
- Edge servers: IoT gateways, routers, or micro data centres in mobile network base stations or on automobiles
- Edge devices: mobile phones, IoT devices, and embedded devices that request services from edge servers
The critical difference between traditional methods and edge intelligence is that data processing and analysis happen locally on the device instead of all data being uploaded to a central cloud server.
Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in a spot close to where data is captured.
Like any other intelligent system, edge intelligence has data collection, training, and inference as its main components.
Data Collection On Edge
In edge intelligence, a distributed data system, known as edge caching, collects and stores the data generated by edge devices. For example, in continuous mobile vision analysis, there are large amounts of similar pixels between consecutive frames. Some resource-constrained edge devices need to upload collected videos to edge servers or the cloud for further processing. With caching, edge devices only need to upload the pixels or frames that differ.
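The frame-filtering idea above can be sketched in a few lines: keep a frame in the cache and upload a new one only when enough pixels have changed. This is a minimal illustration, not a production caching scheme; the 10% threshold and flat pixel lists are assumptions for the example.

```python
# Hypothetical sketch: upload only frames that differ enough from the
# last cached frame, instead of streaming every frame to the server.

def changed_fraction(prev, curr):
    """Fraction of pixels that differ between two equal-sized frames."""
    diffs = sum(1 for a, b in zip(prev, curr) if a != b)
    return diffs / len(curr)

def frames_to_upload(frames, threshold=0.1):
    """Keep a frame only when more than 10% of its pixels changed (assumed policy)."""
    cached = None
    selected = []
    for frame in frames:
        if cached is None or changed_fraction(cached, frame) > threshold:
            selected.append(frame)
            cached = frame
    return selected

# Three nearly identical frames followed by a very different one:
frames = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [9, 9, 9, 9]]
print(len(frames_to_upload(frames)))  # 3 frames pass the filter; 1 is cached away
```

A real deployment would diff compressed frames or feature maps rather than raw pixels, but the bandwidth-saving logic is the same.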
Training On Edge
During training, the optimal values for all the weights and biases, or the hidden patterns, are learned based on the training set cached at the edge. Edge training usually occurs on edge servers or edge devices. However, training on the edge is much slower than on centralized clusters of powerful CPUs or GPUs.
Inference On Edge
On edge devices and servers, inference is performed by a forward pass that computes the output for a test instance. Training and inference are complemented by offloading strategies that distribute the available computing power across tasks.
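The forward pass and the offloading decision can be sketched together: run the model locally when it is small enough, otherwise hand the input to an edge server. The model-size threshold is an assumed, illustrative policy; real offloading decisions also weigh latency, bandwidth, and battery.

```python
# Hypothetical sketch: a tiny forward pass on the device, plus a simple
# offloading rule based on model size (threshold is illustrative).

def forward(weights, bias, x):
    """Single-layer forward pass: weighted sum of inputs plus a bias."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def infer(weights, bias, x, max_local_params=1000):
    """Run locally if the model fits the assumed budget, else offload."""
    if len(weights) + 1 <= max_local_params:
        return "local", forward(weights, bias, x)
    return "offload", None  # a real system would send x to the edge server here

place, score = infer([0.5, -0.25], 1.0, [2.0, 4.0])
print(place, score)  # local 1.0
```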
Edge devices will proliferate with billions of smartphones in use and growing 5G adoption. Smartphones and 5G base stations are part of the edge.
Popular hardware components for edge also include industrial sensors, Raspberry Pi, gateways, microchips, collaborative robots, self-driving vehicles, drones and unmanned aerial vehicles.
Top cloud computing players like AWS, Microsoft Azure and Google Cloud have therefore developed services that bring the advantages of the cloud to the doorsteps of edge devices.
For instance, AWS IoT Greengrass, a service that extends AWS to edge devices, works with a variety of popular programming languages and lets developers create and test device software in the cloud before deployment. Devices can be configured to transmit only the necessary information back to AWS, and they can also connect to third-party applications and on-premises software.
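The "transmit only the necessary information" pattern can be sketched in generic Python: aggregate raw readings on the device and forward only a small summary. This is not the Greengrass SDK; the field names and alert threshold are illustrative assumptions.

```python
# Hypothetical sketch of local filtering before cloud upload: reduce raw
# sensor readings to a compact summary on the device. (Generic Python,
# not the AWS IoT Greengrass API; names and threshold are illustrative.)

def summarize_readings(readings, alert_threshold=80.0):
    """Keep only the fields worth sending upstream; raw data stays local."""
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    summary["alert"] = summary["max"] > alert_threshold
    return summary

readings = [70.5, 71.0, 69.8, 85.2]  # raw values remain on the device
print(summarize_readings(readings)["alert"])  # True: only the summary is uploaded
```

In a real Greengrass deployment, logic like this would run in a function on the device and publish the summary to a cloud topic.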
Google, meanwhile, has developed a purpose-built ASIC (the Edge TPU) designed to run inference at the edge, delivering high performance in a small physical and power footprint.
According to a survey by McKinsey, edge computing is estimated to create more than $200 billion in hardware value in the next five to seven years. And, with the proliferation of 5G smartphones, combined with the demand for privacy, the ecosystem is ripe for edge devices to flourish and transform the way we interact with digital information.
For more information on edge intelligence, read this report.