Google and privacy have long been at odds with each other, but the tech giant may have pushed the envelope again. When AI researcher Kate Crawford asked Google's latest chatbot, 'Bard', about its training data, the bot reportedly claimed the dataset included Gmail. In other words, Google could be mining your emails to train its chatbot. But don't worry, it's all in the name of progress.
To this, Google's Twitter account responded with a denial: "It is an early experiment based on LLMs and will make mistakes," the company said. On a rather amusing note, Google's original, since-deleted response asserted that the company takes "good care of our users' privacy and security", while calling the chatbot 'Barb'.
Looks like the ChatGPT competitor has gotten off to a bad start. To its credit, Google still calls Bard an "experiment" and has only made it available, for free, to early users who signed up for the waitlist. In any case, hallucinations have plagued every chatbot released in this generative AI race, and ChatGPT probably tops the list.
But what if it's true?
Crawford was not the only one to receive this response. Davey Alba, a technology reporter at Bloomberg, also got early access to Bard, asked the same questions, and received the same answers.
The question, then, is: if the company acknowledges that the chatbot is simply answering incorrectly, why can't it fix the problem? The truth is that even when a response like this damages Google's reputation, the company cannot directly control what Bard produces. Moreover, manually intervening to shape individual responses would raise its own ethical problems.
Transparency is the biggest concern with chatbots like 'Bard' or 'ChatGPT'. Just as OpenAI's GPT-4 technical report disappointed researchers by revealing nothing about the model's datasets or inner workings, Bard's responses raise several questions about the data it is trained on. Moreover, the company also counts Google Search and Gmail as its 'internal data', which is, to some extent, quite worrisome.
It is true that companies are competing to build the best AI chatbot there ever was, but hiding the technology isn't helping them make an ethical case for such bots. And even if Google says it wants contributions from users who got early access to Bard, expecting them to identify and fix problems in a model with no reproducibility or transparency is asking too much.
Beyond the dataset question, Bard as a chatbot is very good at producing opinions. But many of them are mere judgments, not necessarily grounded in facts or data. In that respect it resembles ChatGPT: quick, often inaccurate, and therefore deeply flawed. Users have already caught it spinning conspiracy theories and citing articles from The New York Times and The Washington Post that were never published.
It has been demonstrated many times that these chatbots are easy to manipulate into generating whatever responses one wants, even responses that go against how the company trained them to behave. Google is actively taking steps to fix this; Bard replies to offensive questions with: "I am a good AI chatbot and I want to help people."
For now, Bard thinks that Google will shut it down in the near future. That could well be true, since it has already suggested that Sundar Pichai should resign and has sided with the Justice Department in the Google antitrust case.