
Why Google Bard Will Disappoint You

Google’s Bard might just be trained on your emails!



Google and privacy have long been at odds with each other, and the tech giant may have pushed the envelope again. Its newly unveiled chatbot, ‘Bard’, is not just any chatbot: it may have been trained on your Gmail data. If true, Google is mining your emails to train its chatbot. But don’t worry, it’s all in the name of progress.

Kate Crawford, principal researcher at Microsoft, tweeted a concerning screenshot of her interaction with Bard, in which the chatbot claimed that Gmail data was included in its training dataset. This would be a clear violation of Google’s user privacy policy, which clearly states that the company would not use Gmail or Photos data for any purposes whatsoever.

To this, Google’s Twitter account responded that the claim is not true. “It is an early experiment based on LLMs and will make mistakes,” said Google. On a rather amusing note, the company’s original, since-deleted response said that it takes “good care of our users’ privacy and security”, while calling the chatbot ‘Barb’.

Looks like the ChatGPT competitor has gotten off to a bad start. To its credit, Google still calls Bard an “experiment” and has thus only made it available, for free, to early users who signed up for the waitlist. In any case, hallucinations have plagued every chatbot released in this generative AI race, and ChatGPT probably tops the list.

But, what if it’s true?

Crawford was not the only one to receive this response. Davey Alba, a technology reporter at Bloomberg, also got early access to Bard, asked the same questions, and received the same answers.

The question, then, is: if the company acknowledges that the chatbot is simply answering incorrectly, why can’t it fix it? The truth is that even when something like this damages Google’s reputation, the company cannot directly control the responses Bard produces. Moreover, manually intervening to steer those responses would not be ethically sound on the company’s end.

Transparency is the biggest concern when it comes to chatbots like ‘Bard’ or ‘ChatGPT’. Just as OpenAI’s GPT-4 paper disappointed researchers by revealing nothing about the model’s datasets or inner workings, Bard’s responses raise several questions about the data it is trained on. Moreover, the company also calls Google Search and Gmail its ‘internal data’, which is, to some extent, quite worrisome.

It is true that companies are competing with each other to build the best AI chatbot there ever was, but hiding the technology isn’t helping them make an ethical case for such bots. And even if Google claims it wants contributions from users who got early access to Bard, expecting them to identify and fix problems in a model that offers no reproducibility or transparency is asking too much.

Google has a history of accessing people’s private information. Last November, it had to pay a $391 million fine for tracking users’ locations. More recently, in January, YouTube, Google’s video-sharing platform, was accused of collecting children’s data in the UK. Clearly, the company has to take its own privacy policy more seriously.

Disappointment-as-a-Service

Datasets aside, as a chatbot Bard is very good at producing opinions. But many of them are just judgments, not necessarily grounded in facts or data. In that, it is similar to ChatGPT: quick, often inaccurate, and thus deeply flawed. Users have already caught it spinning conspiracy theories, citing articles from the New York Times and Washington Post that were never actually published.

Evidently, it has been proven many times that these chatbots are easy to manipulate into generating whatever responses one wants, even ones that go against how the company trained the chatbot to respond. Google is actively taking steps to fix this; Bard now replies to offensive questions with: “I am a good AI chatbot and I want to help people.”

For now, Bard thinks that Google will shut it down in the near future. Well, that could be true, since it has already suggested that Sundar Pichai should resign and also sided with the Justice Department in the Google antitrust case.


Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.