Everyone is suddenly talking about AI, and so is John Oliver. In the latest episode of Last Week Tonight, Oliver spoke at length about AI’s striking capabilities, from image generators like Midjourney and Stability AI’s Stable Diffusion to ChatGPT, the fastest-growing consumer app in history. Touching upon the perils of the technology, Oliver said the problem is that AI is “stupid in ways we can’t always predict”.
Continuing the segment, Oliver said it is incredible to see AI do things that most humans couldn’t, but that is where the host misses the point. AI visionary Yann LeCun, in an exclusive conversation with AIM, had explained that humans have the ability to break a problem down, step by step, into smaller ones until it is resolved. Planning requires the ability to predict the consequences of one’s actions, and that is something AI cannot do.
Fans of Oliver will remember that this isn’t his first take on AI. Back in August, he slid down the rabbit hole of image generation tools like DALL-E 2 and its rival Midjourney. The recent episode, however, examines the technology’s impact from a critical perspective rather than treating it as mere entertainment.
Raising Concerns
In the latest episode of Last Week Tonight, Oliver raised some valid concerns about AI’s impact on employment, education and even art. But the debate about technology making artistic jobs obsolete began long ago, when the internet was expected to replace newspapers and printing presses.
Micha Kaufman, founder and chief executive of Fiverr, a global online marketplace for freelance services, recently wrote an open letter calling for a truce between AI and humans, urging the latter to unleash the technology’s true potential. Fiverr, for its part, has registered a 1,400% increase in demand for AI-related services.
Meanwhile, And Brill, CEO at Thinkin Labs, posted a job notice requiring applicants to disclose whether their cover letter or resume was AI-generated, stating that disclosure would actually work in an application’s favour. Similarly, Ujjawal Chadha, a software engineer at Microsoft, shared that AI will not replace software developers anytime soon, but that building domain skills will help them stay relevant. As a developer, then, it is crucial to upskill in AI rather than fear being replaced.
Alex Hanna, former Google AI ethicist, told AIM recently that the data used to train models like GPT-3.5 or LaMDA is either proprietary or simply scraped off the internet. “Not a lot of attention is paid to the rights of the people in those data and also the people who have created those data, including artists, writers, etc,” Hanna said, explaining that artists are not compensated and are often treated as an afterthought by the companies.
Lately, artists have begun suing such organisations. For instance, Sarah Andersen, Kelly McKernan and Karla Ortiz dragged the image generation companies Midjourney, DeviantArt and Stability AI to court for using their work without consent. Pointing to Midjourney founder David Holz’s relaxed view on data theft, Oliver quipped: “I am not really surprised. He looks like hipster Willy Wonka answering a question on whether importing Oompa Loompas makes him a slave owner.”
Thinking Outside The Black Box
Besides other problematic impacts, Oliver spoke about the black box problem, the notion that no one can tell how a model arrives at its results, which isn’t entirely true. DeGrave and Joseph Janizek, members of the Lab of Explainable AI for Biological and Medical Sciences, have demystified the problem. Explaining how seemingly mysterious neural networks can be probed, they wrote that it involves finding out which characteristics of the input data affect the results, and using that to infer what is happening inside the black box.
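To make that idea concrete, here is a minimal sketch of one common attribution technique in that family, gradient-based saliency: measure how strongly each input feature sways the model’s output, and read the largest values as what the network is “looking at”. The toy model and random input below are illustrative assumptions, not the researchers’ actual code.

# A minimal gradient-saliency sketch (PyTorch); the model and input
# are toy stand-ins, assumed purely for illustration.
import torch
import torch.nn as nn

# A small classifier standing in for a deep "black box" network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# One input example; requires_grad lets us ask how the output
# would change if each input feature were nudged.
x = torch.randn(1, 10, requires_grad=True)
score = model(x)[0, 1]   # the model's score for class 1
score.backward()         # gradients of that score w.r.t. the input

# Features with large absolute gradients are the ones the model is
# most sensitive to -- a crude map of which inputs drive the result.
saliency = x.grad.abs().squeeze()
print(saliency)

More robust attribution methods exist, but the inference pattern is the same: from input influence, work backwards to what the network is doing inside.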
Further, Oliver addressed ways in which AI can be a bane for the underprivileged. “When Instagram was launched, the first thought wasn’t, ‘this will destroy teenage girls’ self-esteem’,” Oliver recalled.
A decade ago, deep learning labs started getting attention from tech giants. While the majority used the technology for their own gain, a few expressed genuine interest. Today, there are several reasons to believe that some of these technologies are being used to cause more harm than good, as Yoshua Bengio conveyed in an interview with AIM.
Agreeing with Oliver’s comment, Hanna questioned how this was going to serve the most marginalised people right now. “The big tech is currently too focused on language models because the release of this technology has proven to be impressive to the funder class—the VCs—and there’s a lot of money in it,” she shared.