
TAO Marks NVIDIA’s Entry Into MLOps Space

  • New NVIDIA Workflow leverages technology that is readily available, then simplifies the AI workflow with NVIDIA TAO and NVIDIA Fleet Command to make the trip shorter and less costly.

“NVIDIA invested hundreds of millions of GPU compute hours over more than five years refining these models”

The world is keenly watching the technologies debuting at the ongoing NVIDIA GTC Summit 2021, including Omniverse, DGX SuperPOD, AI on 5G, BlueField-3 DPU and many more. Of all the major announcements, TAO stood out for marking the chip giant’s foray into MLOps.

The growing demand for AI-based solutions and platforms, and the need to parse huge datasets, are expected to propel the growth of the enterprise AI market. A MarketsandMarkets study projected the enterprise AI market to reach a $6,141.5 million valuation by 2022.

NVIDIA enters MLOps space

At its GTC Virtual Summit 2021, NVIDIA announced Train, Adapt and Optimise (TAO) – a GUI-based framework designed to make the development of enterprise AI applications and services easier and faster. TAO applies transfer learning and other ML techniques to fine-tune NVIDIA’s pre-trained models on enterprise data.
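The transfer-learning idea TAO builds on can be sketched with a toy example: a frozen "pre-trained" feature extractor plus a small trainable head fitted on new data. Everything below – function names, data, learning rate – is illustrative, not TAO's API:

```python
# Toy sketch of transfer learning: the "pre-trained" backbone stays frozen
# and only a small task-specific head is fitted on the new dataset.
# All names and numbers here are illustrative; this is not TAO's API.

def pretrained_features(x):
    """Stand-in for a frozen pre-trained feature extractor."""
    return [x, x * x]

def train_head(data, lr=0.05, epochs=1000):
    """Fit a linear head on top of the frozen features with plain SGD."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) + b - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# Tiny stand-in for proprietary "enterprise" data: y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (-1, -1)]
w, b = train_head(data)
```

Only `w` and `b` are updated; the backbone never changes, which is why fine-tuning needs far less data and compute than training from scratch.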

TAO integrates NVIDIA’s Transfer Learning Toolkit to generate customised AI models in hours rather than months: fine-tuning pre-trained models removes the need for massive training runs and deep AI expertise. TAO also includes federated learning, which allows multiple machines to collaborate securely on training a shared model and improving its accuracy. Users can exchange model components while protecting data privacy, keeping the data safe inside each company’s data center. For example, researchers at different hospitals can collaborate on one AI model while keeping their data separate to protect patients’ privacy.
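TAO's federated learning implementation is not public, but the core idea – federated averaging, where sites share only model weights and never raw data – can be sketched in a few lines (the hospital weights below are made up for illustration):

```python
# Minimal sketch of federated averaging (FedAvg), the principle behind
# federated learning: each site trains locally and shares only weights,
# never raw data. All names and values are illustrative, not TAO's API.

def federated_average(client_weights):
    """Average model weights from several clients into one shared model."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [
        sum(weights[i] for weights in client_weights) / n_clients
        for i in range(n_params)
    ]

# Three hypothetical hospitals each trained the same 4-parameter model
# locally on their own patient data, which never leaves their premises.
hospital_a = [0.2, 0.4, 0.6, 0.8]
hospital_b = [0.4, 0.2, 0.8, 0.6]
hospital_c = [0.6, 0.6, 0.4, 0.4]

shared = federated_average([hospital_a, hospital_b, hospital_c])
```

The averaged `shared` model reflects what every hospital learned, while each dataset stayed inside its own data center.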


TAO also bundles NVIDIA TensorRT, which optimises a trained model for the device it runs on, balancing the smallest model size against the highest accuracy. TensorRT-based applications, according to NVIDIA, perform up to 40 times faster during inference than CPU-only platforms. With NVIDIA TensorRT, users can optimise models for high-throughput, low-latency inference.

MLOps goes mainstream

Data science is changing the way businesses solve complex problems with huge datasets. MLOps infuses AI/ML activities with the discipline and reliability of DevOps. The mission is to develop ML-intensive applications with continuous integration and delivery (CI/CD).

Of late, many large businesses have started sharing details about their in-house ML stacks. Examples include Facebook (FBLearner), Twitter, Netflix, Airbnb (Zipline), Uber and AWS (Lookout for Equipment).

With TAO, NVIDIA showed it can accelerate AI development by over 10 times by allowing users to fine-tune pre-trained models for voice, vision, natural language understanding and other applications downloaded from NVIDIA’s NGC catalogue.

Nvidia said elements of TAO are already in use in warehouses, retail, hospitals, and factory floors. The major beneficiaries include companies such as Accenture, BMW and Siemens Digital Industries.

According to Adel El Hallak, Director of AI at NVIDIA, many companies lack the specialised skills, access to large datasets or accelerated computing that deep learning requires. Others, meanwhile, are realising the benefits of AI and want to deploy it quickly across products and services. “For both, there’s a new roadmap to enterprise AI. New NVIDIA Workflow leverages technology that is readily available, then simplifies the AI workflow with NVIDIA TAO and NVIDIA Fleet Command to make the trip shorter and less costly,” El Hallak wrote in his blog post.



Copyright Analytics India Magazine Pvt Ltd
