
NVIDIA Announces The Release of New And Improved Accelerated Computing Libraries

65 SDKs advancing quantum computing, last-mile delivery, supercomputing, and more, including new tools for the PyData ecosystem.

NVIDIA announced the release of 65 new and updated software development kits (SDKs), which include libraries, code samples, and documentation. These SDKs enhance the features and capabilities available to data scientists, researchers, students, and developers working on a wide variety of computing tasks.

NVIDIA founder and CEO Jensen Huang announced the enhancements during his GTC keynote address. They include next-generation SDKs for accelerating quantum computing, optimising last-mile delivery, and mining large graphs with graph neural networks.

New SDKs 

NVIDIA ReOpt delivers innovative, massively parallel algorithms for optimising truck routes, warehouse selection, and fleet mix in real-time logistics. Its dynamic rerouting capabilities can significantly reduce travel time, fuel expenses, and idle time, potentially saving logistics and supply chain companies billions.

cuNumeric, for array computing, implements the NumPy application programming interface and scales automatically to multi-GPU and multi-node systems with no code changes – a significant benefit for the Python community of over 20 million data scientists and researchers. It is available now on GitHub and Conda, scales to hundreds of GPUs, and brings accelerated computing to the PyData and NumPy ecosystems.
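Because cuNumeric mirrors the NumPy API, adopting it is typically a one-line change. The following is a minimal sketch, assuming the cunumeric package is installed (for example via Conda); the array size and operations are illustrative.

```python
# Minimal sketch of cuNumeric as a drop-in NumPy replacement.
# Assumes the cunumeric package is installed; swapping the import back to
# "import numpy as np" runs the same script unchanged on plain NumPy.
import cunumeric as np

x = np.arange(1_000_000, dtype=np.float64)
y = np.sqrt(x) + 2.0 * x      # element-wise kernels run on the GPU(s)
print(np.sum(y))              # reductions are handled by the runtime
```

Scaling out is handled by the launcher rather than the code: the same script can be run through the Legate driver with more GPUs or nodes requested on the command line.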

cuQuantum, for quantum computing, lets researchers explore a broader range of algorithms and applications by considerably speeding up the simulation of large quantum circuits. Developers can model near-term variational quantum algorithms for molecules and the error-correction algorithms needed for fault tolerance, in addition to accelerating popular quantum simulators from Atos, Google, and IBM.
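As a concrete illustration of the simulator path, the hedged sketch below runs a two-qubit Bell-state circuit on Qiskit Aer's statevector simulator with the GPU device selected. It assumes qiskit and a GPU-enabled build of qiskit-aer (which can delegate to cuQuantum's cuStateVec) are installed; the circuit and shot count are arbitrary.

```python
# Illustrative sketch: simulating a small Bell-state circuit on a GPU-backed
# simulator. Assumes qiskit plus a qiskit-aer build with GPU support.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)           # put qubit 0 into superposition
qc.cx(0, 1)       # entangle qubit 1 with qubit 0
qc.measure_all()

sim = AerSimulator(method="statevector", device="GPU")  # GPU statevector backend
result = sim.run(transpile(qc, sim), shots=1024).result()
print(result.get_counts())  # roughly even mix of '00' and '11'
```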

The CUDA-X accelerated DGL container for graph neural networks enables developers and data scientists to quickly set up a working environment for GNNs on large graphs. The container simplifies working in a GPU-accelerated GNN environment that combines DGL and PyTorch. Even the world’s largest networks, with close to a trillion edges in a single graph, can be mined for insights using GPU-accelerated GNNs. For example, Pinterest uses graph neural networks with billions of nodes and edges to understand its ecosystem of over 300 billion Pins, relying on GPUs and specialised libraries for model training and inference.
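Inside such an environment, the basic DGL-plus-PyTorch workflow looks roughly like the sketch below: build a graph, move it and its node features to the GPU, and apply a graph convolution layer. The toy graph, feature sizes, and layer widths are purely illustrative, not taken from the container.

```python
# Hedged example of the DGL + PyTorch workflow the container packages.
import torch
import dgl
from dgl.nn import GraphConv

# A toy directed graph with 4 nodes and 4 edges (source -> destination).
g = dgl.graph((torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])))
g = dgl.add_self_loop(g)                 # GraphConv expects self-loops
feats = torch.randn(g.num_nodes(), 8)    # 8-dimensional node features

device = "cuda" if torch.cuda.is_available() else "cpu"
g, feats = g.to(device), feats.to(device)

conv = GraphConv(in_feats=8, out_feats=4).to(device)
out = conv(g, feats)                     # message passing on the GPU
print(out.shape)                         # torch.Size([4, 4])
```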

“Our team is thrilled to partner with NVIDIA to accelerate DGL with RAPIDS cuDF for graph creation, RAPIDS cuGraph for graph sampling, and specialised compute kernels for GNNs,” said Alex Smola, director of Machine Learning at Amazon Web Services. “DGL is offered as an open-source project and as a managed service via Amazon NeptuneML.”

Updated SDKs

Updated SDKs increase application development speed. A variety of NVIDIA’s most popular SDKs have been enhanced and upgraded, including the Clara, DLSS, RTX, Nsight, and Isaac kits.

RAPIDS 21.10 for data science introduces new tools for manipulating time series data and improves the performance of many existing algorithms. The RAPIDS Accelerator for Apache Spark 3.0 lets organisations accelerate their analytics operations on NVIDIA GPUs without requiring any code modifications. With 400 per cent growth in downloads this year, RAPIDS is one of NVIDIA’s most popular SDKs.
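On the library side, the hedged sketch below shows cuDF's pandas-like API on a tiny made-up time series; it assumes a working RAPIDS installation that includes cudf. The Spark accelerator, by contrast, is switched on through Spark configuration (loading the RAPIDS plugin jar) rather than any change to the job's code.

```python
# Hedged sketch of cuDF's pandas-like API; column names and values are
# made up for illustration. Assumes a RAPIDS installation with cudf.
import cudf

df = cudf.DataFrame({
    "ts": cudf.to_datetime([
        "2021-11-01 09:00", "2021-11-01 09:05",
        "2021-11-01 09:10", "2021-11-01 09:15",
    ]),
    "reading": [1.0, 2.5, 3.0, 4.5],
})

# Pandas-style datetime accessors and groupby run on the GPU.
df["minute"] = df["ts"].dt.minute
print(df.groupby("minute")["reading"].mean())

# Rolling windows over a series, also GPU-accelerated.
print(df["reading"].rolling(window=2).mean())
```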

DeepStream 6.0 for intelligent video analytics introduces a new graph composer with a visual drag-and-drop interface, making computer vision accessible to non-programmers and providing a simple, intuitive AI product-development workflow.

Triton 2.15, TensorRT 8.2, and cuDNN 8.4 all gain new optimisations for large language models, along with inference acceleration for gradient-boosted decision trees and random forests.
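To show what serving such models can look like from the client side, here is an illustrative request using Triton's Python HTTP client. The server URL, model name, and tensor names are hypothetical placeholders; it assumes tritonclient[http] is installed and a Triton server is already running with a tree-ensemble model loaded (for example via the forest-inference backend).

```python
# Illustrative Triton client request; model and tensor names are hypothetical.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 4).astype(np.float32)            # one 4-feature row
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

# The client call looks the same whether the server runs a neural network
# or a tree ensemble behind this model name.
response = client.infer(model_name="demo_forest_model", inputs=[infer_input])
print(response.as_numpy("output__0"))
```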

DOCA 1.2 for data centre networking provides a zero-trust security framework that enhances threat prevention through hardware and software authentication, line-rate data encryption, a distributed firewall, and intelligent telemetry.

Merlin 0.8, for recommender systems, introduces new capabilities for predicting a user’s future action with little or no user input, as well as support for models larger than available GPU memory.

To learn more, visit the NVIDIA Developer Zone.


Dr. Nivash Jeevanandam

Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.
