
Salesforce Anchors Trust in the Generative AI Era

Salesforce’s chief scientist believes the inability to build trust among consumers could lead to the next AI winter.


A recent article in Harvard Business Review, titled ‘Why Adopting GenAI is so Difficult’, noted that a wide range of businesses, from large corporations to small enterprises, face challenges in AI integration. The difficulty is not limited to generative AI; it extends to integrating conventional forms of AI, including rule-based algorithms and machine learning, into their operations.

One of the major reasons for this is apprehension about using a technology they don’t fully understand, which makes it crucial for enterprises to instil trust in AI among consumers.

While one of the biggest announcements at TrailblazerDX, Salesforce’s developer conference, was Einstein 1 Studio, another prevailing theme of the event was the imperative to establish trust in AI for enterprises. 

Numerous Salesforce spokespersons emphasised the significance of cultivating trust in AI among consumers, particularly as Salesforce introduces new AI features into its platform.

Indeed, despite the numerous advantages that LLMs bring, businesses remain cautious. This hesitation stems from the concern that these models can hallucinate, producing inaccurate responses, and pose a potential risk of leaking sensitive customer or enterprise data.

Silvio Savarese, Salesforce’s chief scientist, even said that the inability to build consumer “trust could lead to the next AI winter”. 

(Salesforce co-founder and CTO Parker Harris speaking at the keynote of TrailblazerDX 2024)

Building Trust in AI 

Hence, in an effort to enhance the reliability of LLMs, Salesforce introduced the Einstein Trust Layer last year.

This secure intermediary safeguards user interactions with LLMs by masking personally identifiable information (PII), monitoring output toxicity, ensuring data privacy, preventing the persistence of user data, and prohibiting its use in additional training. 

Over time, the CRM company has added different components to improve the trust layer. For example, it is employing another LLM to detect any indications of toxic behaviour, biases, or potentially offensive or harmful content.
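Salesforce has not published the trust layer’s internals, but the flow described above — in essence, masking PII before the prompt leaves the platform, calling the model, screening the output, and retaining nothing — can be sketched in a few lines. Everything below (the function names, the regex patterns, the stand-in screening call) is illustrative, not Salesforce’s implementation.

```python
import re
from typing import Callable

# Illustrative PII patterns only; a real trust layer would use far more
# robust detection (NER models, CRM field metadata, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s()-]{8,}\d\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholders before the prompt leaves the platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}_MASKED>", text)
    return text

def trusted_completion(
    prompt: str,
    llm: Callable[[str], str],
    toxicity_screen: Callable[[str], bool],
) -> str:
    """Mask PII, call the model, screen the output, retain nothing.

    `llm` is any text-in/text-out model call; `toxicity_screen` stands in
    for the second LLM used to flag toxic or harmful content.
    """
    response = llm(mask_pii(prompt))
    if toxicity_screen(response):
        return "Response withheld: flagged by the toxicity screen."
    # Zero retention: the masked prompt and response are not stored or
    # used for any further training.
    return response
```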

Savarese said the company is currently developing a component that generates a confidence score, assessing the AI’s certainty in producing a specific output.

“The confidence metric can be employed to consider involving a human in the loop for further verification or evaluation, potentially requiring three to four rounds of scrutiny,” Savarese told AIM during an interaction on the sidelines of TrailblazerDX, held in San Francisco.

He also mentioned that another crucial component under development for the trust layer is explainability. Once the LLM generates an output, the aim is to explain how it was produced: the decision-making process and the steps involved in arriving at that specific output.
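Savarese did not detail how the confidence score is computed, but the routing he describes, where low-certainty outputs go to a human for up to three or four rounds of review, might look like the sketch below. The threshold, the score, and the review hook are all assumed for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed: a 0-1 self-assessed certainty score
    rationale: str     # explainability: the model's account of its steps

def route(
    output: ModelOutput,
    review: Callable[[ModelOutput, int], bool],
    threshold: float = 0.8,  # illustrative cut-off, not a Salesforce value
    max_rounds: int = 4,     # mirrors the "three to four rounds" quoted above
) -> str:
    """Pass high-confidence outputs through; loop the rest via human review."""
    if output.confidence >= threshold:
        return output.text
    for round_no in range(1, max_rounds + 1):
        if review(output, round_no):  # human approves or rejects this round
            return output.text
    return "Escalated: output could not be verified."
```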

Is it Working?

Despite extensive efforts, completely eliminating hallucinations in LLMs has proven challenging. Nevertheless, enterprises like Salesforce have adopted diverse approaches to mitigate or contain them.

While instilling trust in AI among consumers is crucial, the lingering question is whether these efforts are working. Muralidhar Krishnaprasad, EVP of software engineering at Salesforce, said that early last year there was considerable fear and uncertainty, as people were unfamiliar with LLMs.

“However, the apprehension has somewhat diminished, particularly because stakeholders perceive our trust layers as providing a safeguard, grounding the technology with user data,” Krishnaprasad told AIM.

“Over the past year, there has been a fluctuating trend where confidence has grown, acknowledging the efficacy of the trust layers and ensuring a sense of safety to build further innovations atop the technology,” he added.

Nonetheless, certain regulated industries may still harbour concerns, particularly due to cautious government oversight in these specific areas. But, according to Krishnaprasad, even Salesforce’s public sector clients are showing great interest in AI.

Trusting AI with customer data

Last year, the San Francisco-headquartered company officially launched Data Cloud after announcing it at the previous year’s Dreamforce event. 

Data Cloud allows Salesforce customers to bring all their data into one place and harness the power of unified data for enhanced customer insights, personalised engagement, and seamless integration across the Salesforce platform – all with the help of AI.

As customers migrate their data to the cloud, Salesforce must ensure the grounding of LLMs, preventing potential data leaks and mitigating the risk of LLMs generating highly biased outputs.

Savarese vouches for the safety of bringing customer data to the Salesforce cloud. However, just as hallucinations cannot be entirely eliminated, one cannot completely rule out the possibility of the AI running into issues in certain instances.

“Salesforce’s business is based on the fact that we take good care of customer data,” he said.

Salesforce also allows customers to use their own data lakes. “In this scenario, we introduce a metadata layer. We don’t host customers’ data but establish a critical layer essential for AI operations. Consequently, the cloud is positioned atop customers’ data lakes.”
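Salesforce has not published the schema of this metadata layer, but the idea, describing where the data lives and how to read it without hosting a copy, can be sketched as a simple descriptor that AI features resolve at query time. All the names and the URI below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalTable:
    """Hypothetical metadata record pointing at a customer-owned data lake."""
    object_name: str                    # logical name AI features reference
    lake_uri: str                       # where the rows actually live
    file_format: str                    # table format inside the customer's lake
    grounding_fields: tuple[str, ...]   # fields exposed for prompt grounding

contacts = ExternalTable(
    object_name="Contact",
    lake_uri="s3://customer-lake/crm/contacts/",  # illustrative path
    file_format="parquet",
    grounding_fields=("name", "account", "last_activity"),
)
# At query time the platform would resolve this descriptor and read from
# the customer's lake directly, rather than persisting the rows itself.
```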

Additionally, Savarese emphasised that customer data is strictly excluded from the model training process: the data fed to the models is neither retained nor used by the models for self-training.

“This framework is a crucial structure not only for our proprietary models but also for third-party vendor models, like the GPT models by OpenAI or the Claude series by Anthropic.”

Einstein 1 Studio

Building trust in AI also becomes crucial with the launch of Salesforce’s new AI offering, Einstein 1 Studio, which has three components. The first is Copilot Builder, which allows developers to create custom AI actions to accomplish specific business tasks.

“It enables the creation of actions or references to existing actions, which may reside in Flow, Apex, etc., and then registers them with the copilot so that the copilot knows what tasks the developers can execute. Additionally, the copilot builder encompasses debugging tools that provide insights into the correctness of the plan,” Krishnaprasad said.
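In Salesforce’s stack those actions live in Flow or Apex; as a language-neutral illustration, registering a custom action with a copilot-style planner could look like the sketch below. The registry, the decorator, and the example action are all invented for this sketch.

```python
from typing import Callable, Dict

# Hypothetical registry the copilot's planner consults to learn which
# tasks it can execute and how to invoke them.
ACTIONS: Dict[str, Callable[..., str]] = {}

def copilot_action(name: str, description: str):
    """Register a business task so the planner knows it exists."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        fn.description = description  # the planner matches requests to this
        ACTIONS[name] = fn
        return fn
    return decorator

@copilot_action("escalate_case", "Escalate a support case to tier 2")
def escalate_case(case_id: str) -> str:
    # Real logic would live in Flow or Apex; this stub just reports back.
    return f"Case {case_id} escalated to tier 2."
```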

The second component is Prompt Builder, which allows users to build and activate custom prompts in the workflow. 

“The prompt builder allows you to create prompts and integrate them with data sourced from CRM or the data cloud. It automatically triggers the LLM, retrieves results, and enables the utilisation of prompts across the platform,” Krishnaprasad explained.
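Conceptually, such a prompt template is a string with merge fields resolved against CRM or Data Cloud records before the model is called. The template and field names below are invented for illustration; Prompt Builder’s own merge-field syntax differs.

```python
from string import Template

# Illustrative merge-field template; Prompt Builder uses its own syntax.
FOLLOW_UP = Template(
    "Draft a follow-up email to $contact_name at $account_name. "
    "Their last purchase was $last_product on $last_order_date."
)

def ground_prompt(record: dict) -> str:
    """Resolve merge fields from a CRM record; the grounded prompt would
    then pass through the trust layer on its way to the LLM."""
    return FOLLOW_UP.substitute(record)

print(ground_prompt({
    "contact_name": "Ada Li",          # hypothetical record values
    "account_name": "Acme Corp",
    "last_product": "Premier Support",
    "last_order_date": "2024-02-14",
}))
```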

The third component of Einstein 1 Studio is Model Builder, where developers can build or import a variety of AI models. Krishnaprasad explained that Model Builder itself has three parts. The first is predictive modelling, where users can create their own predictive models or import existing ones from platforms like AWS SageMaker.

“Additionally, users can utilise pre-built models or bring their own LLMs, fine-tuning and customising them for broader usage across the stack. This seamlessly integrates with the Copilot and Prompt Builder functionalities, providing a comprehensive toolset for refining predictive channels,” Krishnaprasad added.
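The bring-your-own-model flow amounts to registering an externally hosted endpoint so the rest of the platform can call it interchangeably with built-in models. The registry below is a hypothetical sketch, not Salesforce’s API.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class RegisteredModel:
    """Hypothetical entry for a customer-supplied model endpoint."""
    name: str
    endpoint: str  # e.g. a SageMaker or self-hosted inference URL
    kind: str      # "predictive" or "generative"

MODEL_REGISTRY: Dict[str, RegisteredModel] = {}

def register_model(model: RegisteredModel) -> None:
    """After registration, Copilot and Prompt Builder could route calls
    to this endpoint just like a built-in model."""
    MODEL_REGISTRY[model.name] = model

register_model(RegisteredModel(
    name="churn-predictor",
    endpoint="https://models.example.com/churn",  # illustrative URL
    kind="predictive",
))
```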

The features are impressive and give Salesforce an edge in the CRM market, concurs Gaurav Kheterpal, CEO of Vanshiv Technologies and a Salesforce Trailblazer.

“I’ve tested it, and based on what I’ve seen and experimented with using a few small datasets, it appears to perform exceptionally well. Even if the product delivers only half of its claim, it would still be a significant success. Fortunately, they seem well-equipped to handle the surge in demand for generative AI technologies,” he told AIM.
