At Knowledge 2023, the California-based software company ServiceNow and NVIDIA announced a partnership to develop custom AI models for various enterprise functions, starting with IT workflows and business automation.
“As the adoption of generative AI continues to accelerate, organisations are turning to trusted vendors with battle-tested, secure AI capabilities to boost productivity, gain a competitive edge, and keep data and IP secure,” said CJ Desai, president and COO of ServiceNow.
Jensen Huang, founder and CEO of NVIDIA, said that IT is the nervous system of every modern enterprise in every industry. He believes this collaboration to build super-specialised generative AI for enterprises will boost the capability and productivity of IT professionals worldwide who use the ServiceNow platform.
This development comes against a backdrop of scepticism in the enterprise IT landscape, particularly around the use of foundation models developed by OpenAI and Microsoft – the likes of GPT-4 and Codex – which have been trained on public-domain data to deliver the desired outcomes.
To make matters worse, a class action lawsuit was filed against Microsoft, OpenAI, and GitHub in November last year for scraping licensed code to build the AI-powered Copilot. The suit has become one of the biggest roadblocks for the companies, which are now looking to escape it and have asked the court to dismiss the proposed class complaint.
In a previous interview with AIM, Gary Bhattacharjee, VP of data strategy and AI at Infosys, said that code IP is a challenge that needs to be addressed. He noted that GPT was trained on everything its makers could find on the internet, including open-source code.
The scepticism is real, centred mostly on the misuse of internal data and the leakage of sensitive enterprise information. Trust issues around using OpenAI or Microsoft platforms are growing by the day.
Samsung, for example, has been vocal about banning ChatGPT after employees used it to troubleshoot proprietary code and summarise internal meeting notes. The company is now looking to ditch ChatGPT altogether and build its own LLM-powered chatbot to prevent further mishaps.
Besides Samsung, several companies, including Amazon, Goldman Sachs, Bank of America, and Wells Fargo, have also restricted employees from using the chatbot over fears of sharing confidential information.
While safety remains a top priority, accuracy is another major concern for enterprises: publicly trained foundation models may not produce output that is accurate for a company's specific needs and requirements.
With this partnership, the duo – NVIDIA and ServiceNow – is looking to address these challenges by building custom generative AI models fine-tuned to each enterprise's needs and focused on domain-specific use cases.
Testing the Waters
To enable this, ServiceNow will use NVIDIA's software, services and accelerated infrastructure to develop custom large language models trained on data specific to the ServiceNow Platform.
The company said this will expand ServiceNow's already extensive AI functionality with new uses for generative AI across the enterprise – for IT departments, customer service teams, employees and developers – strengthening workflow automation and rapidly increasing productivity.
At the same time, ServiceNow will help NVIDIA streamline its own IT operations with these generative AI tools, using NVIDIA data to customise NVIDIA NeMo foundation models running on hybrid-cloud infrastructure that includes NVIDIA DGX SuperPOD AI supercomputers.
Jonathan Cohen, VP of Applied Research at NVIDIA, explained how the guardrails could be implemented. He said that while NVIDIA has been working on its Guardrails system for years, the team found about a year ago that it also works well with OpenAI's GPT models.
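NVIDIA has not published ServiceNow-specific guardrail code, so the sketch below only illustrates the general idea of a topical guardrail: a rule layer that screens a user message for off-limits topics before it ever reaches the model. All names and keyword lists here are hypothetical; this is not the NeMo Guardrails API.

```python
# Minimal, hypothetical sketch of a topical guardrail: a pre-filter that
# intercepts sensitive requests before any LLM call is made.

BLOCKED_KEYWORDS = {"salary", "password", "source code"}  # assumed sensitive topics
REFUSAL = "I can only help with IT support requests."

def guard_input(message: str) -> bool:
    """Return True if the message is safe to forward to the model."""
    lowered = message.lower()
    return not any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

def answer(message: str) -> str:
    if not guard_input(message):
        # Short-circuit with a canned refusal instead of calling the model.
        return REFUSAL
    # Placeholder for the actual LLM call, e.g. a fine-tuned NeMo model.
    return f"[LLM] handling IT request: {message}"

print(answer("My VPN client will not connect"))
print(answer("Can you read me the admin password?"))
```

Production systems layer far more on top of this (semantic matching, output-side checks, dialogue flows), but the short-circuit structure is the core concept.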
Enterprise-Specific Use Cases
The duo is exploring a number of generative AI use cases to enhance productivity in IT. This includes developing virtual assistants and agents that quickly resolve a wide range of user questions and support requests – purpose-built AI chatbots that leverage large language models and focus on defined IT tasks.
In addition, they are looking to simplify the user experience so that enterprises can customise chatbots with proprietary data, creating a central generative AI resource that stays on topic while resolving multiple requests, and to improve the employee experience by helping identify growth opportunities, among other things.
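Keeping a chatbot "on topic" with proprietary data is commonly done by retrieving a relevant internal document and injecting it into the prompt, so the model answers from company knowledge rather than its general training data. The sketch below is a deliberately simplified illustration of that retrieval step – word-overlap scoring stands in for a real embedding-based vector search, and none of the names come from ServiceNow or NVIDIA.

```python
# Hypothetical sketch of grounding a chatbot on proprietary documents:
# pick the most relevant internal snippet, then build a prompt that tells
# the model to answer only from that snippet.
import re

KNOWLEDGE_BASE = [
    "To reset your corporate VPN, open the IT portal and choose 'VPN reset'.",
    "Printer queues are managed from the Facilities dashboard.",
    "New laptops are provisioned through the ServiceNow hardware catalogue.",
]

def tokens(text: str) -> set:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str) -> str:
    """Return the knowledge-base entry sharing the most words with the question."""
    q = tokens(question)
    return max(KNOWLEDGE_BASE, key=lambda doc: len(q & tokens(doc)))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        "Answer ONLY from the context below; otherwise say you don't know.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

print(build_prompt("How do I reset my VPN?"))
```

A production system would replace the word-overlap scoring with a vector database and add the guardrail and fine-tuning layers discussed above, but the retrieve-then-prompt shape is the same.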
IT has become the sweet spot for generative AI companies. NVIDIA now appears to be giving tough competition to the likes of Microsoft, OpenAI, IBM and Google, while also challenging enterprise automation companies such as SAP and Zoho, all of whom are looking to impress enterprises with generative AI services and offerings.
Recently, Cognizant announced the launch of Cognizant Neuro AI, a platform that gives enterprises a comprehensive approach to adopting generative AI in a flexible, secure, scalable and responsible way. Prasad Sankaran, the company's VP of software and engineering, said the platform goes beyond PoCs, aiming to accelerate the adoption of enterprise-scale AI applications while improving RoI and minimising risks.
TCS is working on its own alternative to GitHub Copilot to revamp enterprise code generation, while Capgemini, another IT player, is also bringing generative AI-based solutions to its clients.
Last week, IBM announced the launch of watsonx, a platform that enables enterprises to design and customise LLMs to their operational and business needs. Last month, OpenAI said it would launch ChatGPT Business in the coming months, promising enterprises more control over their data and over how their teams use the chatbot.
Meanwhile, Microsoft has stepped up its chip game to take on NVIDIA, showing a newfound interest in AMD: reports suggest Microsoft has been secretly working with AMD on its own AI processors, known as Athena, since 2019. Speculation suggests Microsoft may be financing AMD's AI chip development – a move against its current partner NVIDIA that caused NVIDIA's shares to decline. Microsoft has previously used AMD's technology in various products, including the AI infrastructure of its Azure cloud services and the Xbox Series X and Series S consoles.
Now, let's see whether NVIDIA continues to lead the game or gives in to Microsoft's threat.