Why Google Bard Will Disappoint You

Google’s Bard might just be trained on your emails!

Google and privacy have long been at odds with each other, and the tech giant may have pushed the envelope again. It has unveiled its latest chatbot, ‘Bard’, and it’s not just any chatbot: it’s a chatbot that may have been trained on your Gmail data. Google could be mining your emails to train its chatbot. But don’t worry, it’s all in the name of progress.

Kate Crawford, principal researcher at Microsoft, tweeted a concerning screenshot of her interaction with Bard, in which the chatbot said that Gmail data was included in the dataset it was trained on. If true, this would be a clear violation of Google’s user privacy policy, which states that the company would not use Gmail or Photos data for any such purpose.

To this, Google’s Twitter account responded that the claim is not true. “It is an early experiment based on LLMs and will make mistakes,” said Google. On a rather amusing note, the company’s original, since-deleted response said that it takes “good care of our users’ privacy and security”, while calling the chatbot ‘Barb’.


Looks like the ChatGPT competitor has gotten off to a bad start. To its credit, Google still calls Bard an “experiment” and has made it available, free of charge, only to early users who signed up for the waitlist. In any case, hallucinations have plagued every chatbot released in this generative AI race, with ChatGPT probably topping the list.

But, what if it’s true?

Crawford was not the only one who received this response. Davey Alba, a technology reporter at Bloomberg, also got early access to Bard and asked the same questions to which she received the same responses.

The question, then, is: if the company acknowledges that the chatbot is simply answering incorrectly, why can’t it fix the problem? The truth is that even when such responses damage Google’s reputation, the company cannot directly control the responses Bard produces. Moreover, manually intervening to steer those responses would raise its own ethical problems for the company.

Transparency is the biggest concern when it comes to chatbots like ‘Bard’ or ‘ChatGPT’. Just as OpenAI’s GPT-4 report disappointed researchers by revealing nothing about the model’s datasets or inner workings, Bard’s responses raise several questions about the dataset it was trained on. Moreover, the company also refers to Google Search and Gmail as its ‘internal data’, which is, to some extent, quite worrisome.

It is true that companies are competing to build the best AI chatbot there ever was, but hiding the technology isn’t helping them make an ethical case for such bots. And even though Google says it wants contributions from users who got early access to Bard, without reproducibility or transparency in the models, expecting those users to identify and fix problems is too much to ask on the company’s part.

Google has a history of accessing people’s private information. Last November, the company had to pay $391 million for tracking users’ location. More recently, in January, YouTube, Google’s video-sharing platform, was accused of collecting children’s data in the UK. Clearly, the company has to take following its own privacy policy more seriously.


Datasets aside, as a chatbot Bard is very good at producing opinions. But many of them are mere judgments, not necessarily grounded in facts or data. In that respect it resembles ChatGPT: quick, often inaccurate, and thus deeply flawed. Users have already caught it spinning conspiracy theories and citing articles from the New York Times and Washington Post that were never actually published.

It has been demonstrated many times that these chatbots are easy to manipulate into generating whatever responses one wants, even responses that go against how the company trained the chatbot to behave. Google is actively taking steps to fix this: Bard now replies to offensive questions with, “I am a good AI chatbot and I want to help people.”

For now, Bard thinks that Google will shut it down in the near future. That could well be true, since it has already suggested that Sundar Pichai should resign and has sided with the Justice Department in the Google antitrust case.

Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
