The dangers of not understanding innovation money are best illustrated by the battle between two inventors: Thomas Edison and Nikola Tesla. The former produced inventions largely through trial and error, yet knew how to capitalise on them.
Tesla’s ideas were arguably brilliant. The visionary was even described by Edison as someone whose “ideas are magnificent.” But he was simply unable to attract the financial resources needed to commercialise them.
Currently, generative AI models are having an Edison moment. The companies making these models accessible through products and services are thriving, more so than the research behind them, which is arguably the more impactful work. Tech bros who don’t want to miss the opportunity, and are busy shipping oversold technologies, should remember some fundamental realities about tech bubbles.
A bubble’s shelf life depends on the narratives about how the specific technology will affect societies and economies, as professors Brent Goldfarb and David Kirsch noted in their 2019 book, Bubbles and Crashes: The Boom and Bust of Technological Innovation. Unfortunately, the early narratives that form around new technologies commonly fail to meet expectations.
GenAI is not AI
Companies are chasing text-to-anything, as was evident at their latest annual conferences: Google I/O, Microsoft Build, and IBM Think. They have been locked in an aggressive tussle since the release of ChatGPT in late November last year. The developers who showed off their latest technologies have been working around the clock, under pressure to deliver, a strain that some of the speakers at these conferences echoed.
Google chief Sundar Pichai kicked off Google I/O 2023 with “AI is having a very busy year” and then announced a list of Google products that will henceforth be integrated with its language model PaLM 2. Similarly, during Microsoft Build, company chief Satya Nadella chanted the ‘Copilot’ mala. Yet even though investors seemed impressed with AI being embedded into every facet of the Redmond giant’s offerings, the shares fell fractionally.
Earlier, in February, when Google’s Paris event promoting its AI chatbot Bard turned into a debacle, the company’s stock sank 7%, and Google employees described Pichai’s announcement as “rushed” and “botched.”
Meanwhile, other brilliant innovations are overlooked or denied the appreciation they deserve. Of the 100 announcements at I/O, around 40 concerned generative AI, and there was hardly any new update on AlphaFold, which is revolutionising the life sciences landscape.
Other jaw-dropping feats overshadowed by the distraction of language models include DragGAN, Meta’s CICERO, DeepMind’s nuclear-fusion control algorithm, and the list goes on.
“Looking back, it’s amazing how easy things were for researchers when I was a young man, in comparison to just how competitive the field has become,” the 80-year-old American computer scientist Jeffrey Ullman told AIM, adding that academics and researchers who could be excellent teachers, or focus on other groundbreaking innovation, are pushed into doing second- and third-rate research because that is how they get recognised or promoted.
The Shiny Object Syndrome
The AI wave highlights a pervasive issue known as shiny object syndrome, common in the tech industry, where researchers are easily distracted by novel tools and trends. In an edition of ‘Letters by Andrew Ng,’ the founder of DeepLearning.AI stated that AI has an Instagram problem. “I’m here to say: Judge your projects according to your standard, and don’t let the shiny objects make you doubt the worth of your work!” he declared.
He went on to address people who doubt their work’s worth by judging it against the polished standards set by the media. He wrote, “Just as pictures of people’s perfect lives in the media aren’t representative, pictures of AI developers’ postings of their amazing projects also aren’t representative.”
On a similar note, Andrew Ng’s mentor Michael Irwin Jordan told AIM that most of these are buzzwords. “Just because you’re using computer vision or ChatGPT as some part of that doesn’t necessarily change anything,” he said.
Apart from affecting research, the false promise of technology also takes a toll on the global economy. Bank of America strategist Michael Hartnett recently noted that tech and AI are forming a risky bubble. Experts are also predicting that Silicon Valley’s darling, the GPT bubble, could cause a meltdown similar to the dotcom bubble, which led to a stock market crash in early 2000.
The bottom line is that the internet’s favourite language models are certainly entertaining for the public and a cash cow for the industry, but the technology is being oversold in a way that threatens to distort the purpose of research. Amid the chaos in tech, one should recall when Tesla pinpointed impatience as researchers’ problem. Highlighting their eagerness to see their ideas work, he said, “They want to try their first idea right off; and the result is they use up lots of money and lots of good material, only to find eventually that they are working in the wrong direction. We all make mistakes, and it is better to make them before we begin.”