What Are The Challenges Of Establishing A TinyML Ecosystem

There is growing interest in expanding the scope of edge ML to microcontroller-class devices, and this is where TinyML comes into the picture. As the name suggests, TinyML is about running machine learning models on tiny, low-power devices.

Tiny machine learning applications span hardware (dedicated integrated circuits), algorithms and software capable of performing on-device analytics of sensor data (vision, audio, IMU, biomedical, etc.) at extremely low power, typically in the mW range and below. This enables a variety of always-on use cases on battery-operated devices.

TinyML enables greater responsiveness and privacy while avoiding the energy cost associated with wireless communication.

With the large-scale adoption of 5G around the corner, experts expect TinyML to grow exponentially into a market that encompasses billions of consumer and industrial systems.

Current Landscape And Challenges In TinyML 


TinyML models must be small enough to fit within the tight constraints of microcontroller unit (MCU)-class devices, which have a few hundred KB of memory and limited onboard compute (processor clock speed).

Popular image classification tasks with large label spaces would be well suited to low-power, always-on applications, but they are computationally intensive and memory-hungry for today's TinyML hardware.

A report published by a team from Harvard University elaborates on the challenges that TinyML must overcome.

Inconsistent Power Usage

Choosing a TinyML model requires a reference point: a benchmark that tells you whether a model suits your needs or not. But TinyML devices consume widely varying amounts of power, which makes maintaining accuracy across the range of devices difficult, and makes benchmarking even harder. Comparisons are also hard to draw when data paths and pre-processing steps vary significantly between devices, and other factors such as chip peripherals and the underlying firmware can affect the measurements.
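Power draw alone is an incomplete metric: for battery-operated devices, what matters is energy per inference, the product of average power and latency. A minimal sketch with hypothetical numbers shows why two devices with different power profiles are hard to rank:

```python
# Energy per inference = average power x latency. The two "chips" and
# their numbers below are illustrative assumptions, not measurements.

def energy_per_inference_mj(power_mw: float, latency_ms: float) -> float:
    """Energy in millijoules: mW x ms / 1000 = mJ."""
    return power_mw * latency_ms / 1000.0

# A faster, higher-power chip can still win on energy per inference:
fast_chip = energy_per_inference_mj(power_mw=5.0, latency_ms=20.0)   # 0.1 mJ
slow_chip = energy_per_inference_mj(power_mw=2.0, latency_ms=100.0)  # 0.2 mJ
```

A benchmark that reports only peak power would rank these two devices the opposite way from one that reports energy per inference, which is one reason consistent measurement rules matter.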

Memory Constraints

As the name suggests, memory on these devices is stretched to the limit. The budget is far tighter than on traditional ML systems: where smartphones cope with resource constraints on the order of a few GB, TinyML systems typically cope with resources two orders of magnitude smaller.
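To make the gap concrete, here is a back-of-envelope check of whether a model's weights fit a given memory budget. The 256 KB budget and the parameter counts are illustrative assumptions, not figures from the report:

```python
# Rough weight-storage estimate (ignores activations, runtime and code size).
# All figures are illustrative assumptions.

def weights_kb(num_params: int, bytes_per_weight: int) -> float:
    """Storage for the weights alone, in KB."""
    return num_params * bytes_per_weight / 1024

MCU_BUDGET_KB = 256  # assumed budget for a Cortex-M-class device

# A 1M-parameter vision model in float32: ~3906 KB, far over budget.
vision_model = weights_kb(1_000_000, 4)

# A 50k-parameter keyword spotter in int8: ~49 KB, fits with room to spare.
keyword_spotter = weights_kb(50_000, 1)
```

The same model that is "small" by smartphone standards can exceed an MCU's entire memory by an order of magnitude, which is why model size is the first number a TinyML benchmark has to report.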

These challenges are also the deciding factors in benchmarking TinyML use cases. For TinyML to enter mainstream marketplaces, there needs to be a universally accepted standard: a benchmark that would allow one to assess TinyML as a service.

However, the authors see a glimmer of hope: the advent of cheap, low-power 32-bit MCUs has revolutionised computational capability at the edge. Platforms such as Arm's Cortex-M family, widely used for ML on IoT devices, now regularly perform tasks that were previously thought infeasible.

So, when models fit within the tight on-chip memory constraints, they cut the computational costs that usually hamper traditional machine learning platforms. The widespread adoption and dispersion of TinyML are reliant on the capability of these platforms.
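One common route to fitting a model on-chip is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats, a 4x reduction in size. Below is a minimal, framework-free sketch of affine int8 quantization; the function names are assumptions for illustration, not part of any particular toolkit:

```python
# Affine (asymmetric) quantization: map a float range [lo, hi] onto the
# signed 8-bit range [-128, 127]. Illustrative sketch only.

def quantize(values, num_bits=8):
    """Return (int codes, scale, zero_point) for a list of floats."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a constant tensor
    zero_point = round(qmin - lo / scale)     # code that represents 0.0's offset
    codes = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return codes, scale, zero_point

def dequantize(codes, scale, zero_point):
    """Recover approximate floats from the int codes."""
    return [(c - zero_point) * scale for c in codes]
```

Each weight now occupies one byte instead of four, at the cost of a reconstruction error of roughly one quantization step (`scale`) per weight.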

Future Direction

The privacy and customisation that these TinyML platforms promise have the potential to break the barriers that edge and cloud impose on ML-based applications. There are certainly tradeoffs between memory and performance, along with other factors like peripherals and firmware. But if you want a reliable smart speaker with ML models that run inferences on the go and in real time, you need something that matches the capabilities of TinyML frameworks.

Another example is augmented reality glasses, which must stay powered around the clock and cannot afford the latency that usually surfaces with cloud or edge offloading. This is where TinyML systems flourish.

Taking advantage of intelligent algorithms in the IoT context also means equipping IoT end-devices (such as sensors, actuators and microcontrollers) with functionalities capable of unleashing the power of ML algorithms on the IoT device itself, extending the use of ML in IoT beyond the cloud.


Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
