“Our motivation has always been to take bigger, bolder steps in AI. And that’s exactly what my team did and strives to do going forward,” said Dr Oren Etzioni, CEO of the Allen Institute for AI (AI2). A few hours later, Oren announced he would step down after nearly nine years with the organisation.
After nearly 9 years, @etzioni is stepping down as AI2's CEO. We are grateful for Oren's incredible dedication and vision in shaping the institute into what it is today, and we look forward to continuing @PaulGAllen's legacy in our mission of AI for Good. https://t.co/2MTbU9v73t
— Allen Institute for AI (@allen_ai) June 15, 2022
Oren’s association with AI2 started in 2010. He was already a force in the field of AI and a serial entrepreneur. Oren also served as a professor at the University of Washington’s Department of Computer Science and Engineering and has supervised over 15 PhD students in areas including machine reading, data mining, web search, and software agents.
“I became increasingly impatient with the steady, incremental progress of the field. I felt like I was getting old, and I had this quest in my heart to understand intelligence and build AI technology. And it just felt like we were moving very slowly.”
Oren Etzioni
At the time, Microsoft co-founder Paul G Allen was prospecting for new frontiers across a broad range of areas, including science, technology, education, and conservation. He had established the Seattle-based Vulcan Inc to oversee his business and philanthropic efforts, alongside several non-profit scientific institutes to accelerate important areas of research – the Allen Institute was one of them. Within a decade, the Allen Institute expanded from its initial pursuit of understanding the brain to encompass an investigation of the inner workings of cells, an exploration of the human immune system, and the funding of transformative scientific ideas worldwide.
“The late Paul Allen had his team reach out to me. They said they were going to create a new AI Institute in Seattle and wanted to discuss it. I had to go through a series of interviews. There was one with Peter Clark, with whom I worked closely for many years. He worked at Vulcan and was a very important member of the community. There was a scientific advisory board that included Raj Reddy and Tom Mitchell. There were a lot of good questions asked, but the thing that I remember more than anything was the conversation with Paul Allen. He made it clear that his goal was not to create another university department,” said Oren.
As Oren remembers, Paul said, “I’m already funding research in many places and certainly at the University of Washington’s computer science department. So I don’t want you to just create a group to write more papers.”
“He was looking to build an organisation that would have an outsized impact. He was really fascinated with the question of how a computer could truly understand text – not the way a search engine or a GPT-3 understands text. He wanted to build a program that could at least take a chapter in a textbook and answer the questions in the back of the book. That’s a hard problem he had his heart set on solving for years. As an entrepreneur himself, he liked that I wasn’t an academic but an entrepreneur, and it was in my DNA to move fast and focus. He found the combination of ambitious goals and the willingness to move fast really appealing. And that was it. Paul Allen and I hit it off. In just three months, we launched the Allen Institute for AI as a non-profit in 2014. Our mission was and is AI for the common good. With Paul Allen’s vision and resources, the sky’s the limit,” Oren added.
Over the last eight years, AI2 researchers have published close to 700 papers in AI and ML. AI2 also offers several key resources and tools to the AI community, including the AllenNLP library, Semantic Scholar, and the conservation platforms EarthRanger and Skylight.

The AI2 Incubator is a key initiative of AI2, where Etzioni will continue as a Technical Director. Over the last six years, the incubator has spun out 19 companies collectively valued at USD 767 million; these companies have raised USD 164 million in venture funding and created 500 jobs, with more than half of that growth occurring in the last 12 months.
Common sense is not so common
Of the many projects AI2 has initiated, Oren recalls a particularly interesting one revolving around building machine common sense.
“In general, common sense is a fundamental problem, and computers don’t have common sense. We’re very good at building what I call an intelligent savant – a program that will play Go or detect objects in an image – but these programs are also brittle. If you move them even slightly out of their comfort zone, these programs are very easy to break because they don’t really understand what they’re doing. We’ve realised that if we want to build safe computers that are robust and trustworthy, we need to endow them with common sense. And that’s not an easy problem,” said Oren.
AI2 researcher Yejin Choi and her team have been working on a project named MOSAIC, which uses modern methods like crowdsourcing and machine learning to infuse common sense into computers.
Mirrors are not intelligent
In May 2021, Google announced LaMDA, a 137-billion-parameter, Transformer-based neural language model specialised for dialogue. Recently, LaMDA made headlines when Google engineer Blake Lemoine claimed it was sentient.
People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn't let us build one. My opinions about LaMDA's personhood and sentience are based on my religious beliefs.
— Blake Lemoine (@cajundiscordian) June 14, 2022
“While I haven’t played with LaMDA directly, I’ve definitely seen this phenomenon. For me, these technologies are effectively a mirror. They just reflect their input and mimic us. So you give it billions of parameters and build its model. And then when you look at it, you are basically looking in the mirror, and when you look in the mirror, you can see glimmers of intelligence, which in reality is just a reflection of what it’s learnt. So what if you scale these things? What if we go to 10 billion or a hundred billion? And my answer is you’ll just have a bigger mirror,” said Oren.
“As humans, we tend to anthropomorphise. So the question we need to ask is, is the behaviour that we see truly intelligent? If we focus on mimicking, we focus on the Turing test. Can I tell the difference between what the computer is saying and what a person would say? It’s very easy to be fooled. AI can fool some of the people all of the time and all of the people some of the time, but that does not make it sentient or intelligent,” he added.
So, how do we know if it’s actually intelligent?
“We can’t just look under the hood of a machine. If you open up my head, you’d see neurons and ions; you wouldn’t see a little person or the intelligent part. Likewise, if you open the hood on LaMDA, you’ll see a bunch of wires and circuit boards. So how do we tell if it’s intelligent? Firstly, it’s hard, particularly as the technology gets better and better at mimicking. Secondly, it requires robust, sustained, multi-faceted behaviour,” said Oren.
I’d ask the Google engineer, who felt that the program was sentient and intelligent, whether he’d be willing to have LaMDA determine his investment strategy for retirement. And the answer will, of course, be no. I don’t trust it because I know that while it can say some things, it’s not actually embodying judgement.
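Oren’s “mirror” framing maps closely onto how large language models generate text: at every step, the model simply scores candidate next tokens against the patterns in its training data. A minimal illustrative sketch of that behaviour – using the open-source Hugging Face transformers library and the publicly available GPT-2 checkpoint as a stand-in, since LaMDA itself is not public – might look like this:

# A minimal, illustrative sketch (not AI2's or Google's code): a causal language
# model only scores possible next tokens against patterns in its training data.
# Assumes the open-source Hugging Face "transformers" library and the small,
# publicly available "gpt2" checkpoint as a stand-in for a dialogue model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: Are you sentient?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The "reply" is simply whichever tokens are most probable after the prompt --
# a reflection of the text the model was trained on, not a considered judgement.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

Whatever comes out is the statistically likely continuation of the prompt – exactly the reflection Oren describes. Scaling the model up sharpens the reflection; it does not make the system sentient.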
AGI is an illusion
Nando de Freitas, an AI scientist at DeepMind, tweeted “the game is over” upon Gato’s release, saying scale and safety are the only remaining roadblocks to AGI.
2029 feels like a pivotal year. I’d be surprised if we don’t have AGI by then. Hopefully, people on Mars too.
— Elon Musk (@elonmusk) May 30, 2022
“The large majority of us who are actively working in the trenches building AI systems know that nothing could be further from the truth. The systems we build are doing only the last mile of intelligence. I think one of the best illustrations of this is actually a line that goes all the way back to Pablo Picasso decades ago: ‘Computers are useless. They can only give you answers.’ And I think the same thing applies here. If we frame the question and give it enough training data, and we get everything right, the computer will answer the question, and it will do a good job at least some of the time. But what about asking good questions in the first place? What about the framing of the problem? What about contextual understanding? Common sense? We have not really made any progress on this because it requires very different capabilities,” Oren said.

“Life is not an optimisation problem. Life is what happens in real time in a very vague context. I’m not optimising my life because I’m not even sure what I want. I’m just trying to figure it out. I’m trying to muddle through. So when I look at what we’re doing, I think technology is advancing very rapidly, but I would not confuse that with huge progress on what’s a decades-long, centuries-long quest to build human-level intelligence,” he added.