NVIDIA chief Jensen Huang today took the stage at GTC 2023 to announce a string of captivating advancements in generative AI. “Generative AI has triggered a sense of urgency to develop AI strategies,” he said, before unveiling a bundle of updates. As AIM predicted, generative AI has been the dominant theme of GTC 2023.
He also announced NVIDIA DGX Cloud, an AI supercomputing service that provides companies with the software and infrastructure needed to train advanced models for generative AI. It offers immediate access to these resources through a web browser, removing the complexity of acquiring, deploying and managing on-premises hardware.
Furthermore, the tech giant announced a set of model-making services for language, visuals and biology under the brand name ‘NVIDIA AI Foundations’. “NVIDIA AI Foundations let enterprises customise foundation models with their own data to generate humanity’s most valuable resources — intelligence and creativity,” announced Huang.
He said users can now create their own models from scratch or start with one of NVIDIA’s pre-trained models and customise from there, using NVIDIA NeMo, Picasso and BioNeMo for language, visuals and biology respectively. With NeMo and the Picasso (image, video and 3D) service, they can build domain-specific models, digital simulations and more. NeMo increases the relevance of large language models (LLMs) for businesses by defining areas of focus, adding domain-specific knowledge and teaching functional skills.
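To make the idea of “adding domain-specific knowledge” concrete, here is a minimal sketch of domain-adaptive fine-tuning of a causal language model on in-house text. It uses the open-source Hugging Face transformers and datasets libraries as a generic stand-in rather than the NeMo service itself, and the model name and corpus file are illustrative placeholders.

```python
# Minimal sketch of domain-adaptive fine-tuning on in-house text.
# Hugging Face transformers/datasets are used as a generic stand-in for the
# kind of customisation NVIDIA describes for NeMo; "gpt2" and
# "domain_corpus.txt" are illustrative placeholders, not NVIDIA artefacts.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the company's own text corpus (one document per line).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("domain-llm")
```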
Picasso lets users build and deploy generative AI-powered image, video and 3D applications, with advanced text-to-image, text-to-video and text-to-3D capabilities exposed through simple cloud APIs to supercharge productivity in creativity, design and digital simulation.
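Picasso’s API details were not part of the announcement, so the snippet below is only a hypothetical illustration of what generating an image through a simple cloud API could look like; the endpoint URL, request parameters and token are invented placeholders, not the real Picasso interface.

```python
# Hypothetical illustration of a text-to-image request through a cloud API.
# The endpoint, parameters and token are invented placeholders and do not
# reflect the actual NVIDIA Picasso API.
import requests

API_URL = "https://api.example.com/v1/generate/image"   # placeholder endpoint
payload = {
    "prompt": "photorealistic concept render of a warehouse digital twin",
    "width": 1024,
    "height": 1024,
}
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}     # placeholder token

response = requests.post(API_URL, json=payload, headers=headers, timeout=120)
response.raise_for_status()

# Save the returned image bytes to disk.
with open("generated.png", "wb") as f:
    f.write(response.content)
```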
Rounding out AI Foundations, the company unveiled new models for BioNeMo, its cloud service for biology, which can accurately predict the structure of a protein in seconds. In addition to its previously announced generative chemistry models, BioNeMo now includes six new open-source models, among them DeepMind’s AlphaFold2 and Meta AI’s ESM2. Paired with BioNeMo, these models let researchers customise their own models on a fully managed software service built on the NVIDIA AI Enterprise software suite.
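Meta AI’s ESM2, one of the open-source models mentioned above, can also be explored directly through the Hugging Face transformers library. The sketch below, assuming the publicly available facebook/esm2_t6_8M_UR50D checkpoint and a toy protein sequence, shows how a researcher might pull per-residue embeddings outside of BioNeMo.

```python
# Sketch: per-residue embeddings from a small open-source ESM2 checkpoint via
# Hugging Face transformers (outside of BioNeMo); the checkpoint and the toy
# protein sequence below are purely illustrative.
import torch
from transformers import AutoTokenizer, EsmModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy protein sequence
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One embedding vector per residue (plus special tokens at both ends).
embeddings = outputs.last_hidden_state.squeeze(0)
print(embeddings.shape)   # (sequence length + 2, hidden size)
```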
The slew of announcements did not stop there. Huang went on to reveal the integration of generative AI into the Omniverse platform, alongside the Omniverse Audio2Face app, an updated Unreal Engine Connector, an open-beta Unity Connector and new SimReady 3D assets for creating and operating metaverse applications.
Mutual Understanding
The company further disclosed a partnership with Adobe to ‘co-develop a new generation of generative AI models’. Some of the models will be marketed through Adobe’s flagship Creative Cloud products such as Adobe Photoshop, Adobe Premiere Pro and Adobe After Effects, as well as through the new Picasso service to reach third-party developers. The partnership’s primary priority is to support the commercial viability of the new technology and to ensure content transparency through Content Credentials, powered by Adobe’s Content Authenticity Initiative.
Getty Images and Shutterstock will also use the newly revealed Picasso service, with models trained on their libraries of legally licensed images. Other companies adopting AI Foundations models include Morningstar, a financial services firm, and Quantiphi, an AI-first digital engineering company.
Morningstar is using large language models to extract insightful data from complex structured and unstructured content at scale, an approach that emphasises the importance of data quality and speed, said Shariq Ahmad, head of Data Collection Technology at Morningstar. The firm is currently using NeMo to research how LLMs can efficiently scan and summarise financial documents to extract market intelligence.
NVIDIA is also working with Quantiphi on its service baioniq, which lets enterprises build customised LLMs equipped with up-to-date information.