
Meet The AI Expert Who Tested Bangla on GPT

In an exclusive interview, Irene Solaiman shared her journey from OpenAI to becoming the Policy Director at Hugging Face


In 2019, Irene Solaiman was among the first people to question and work on the social impact and bias of large language models (LLMs). In an exclusive interview with AIM, she recalled her Bangladeshi heritage and said that, while she was at OpenAI, she “started prompting GPT-2 and GPT-3 in Bangla.”

“To my knowledge, this is the only test in a non-Latin character language from an OpenAI publication. Google has started doing the same for Bard. I’m not sure where this correlation comes from, but researcher representation goes a long way,” she added. 

Currently, Solaiman is the Policy Director at Hugging Face, and a big part of her heart is in research into AI that is safe, ethical, and responsible towards different cultural groups. After studying human rights policy, Solaiman realized that reading about human rights violations 12 hours a day was draining. So she learned to code and went straight from graduate school to OpenAI, which was then transitioning from a nonprofit.

Value Misalignment Paradox

Solaiman loves ‘Star Trek: The Next Generation’. She said, “We should be cognizant of dangers, but also of dystopian novels. Several times they reflect historical events, and it’s important to refer back to them as opposed to sci-fi.”

She suggested grounding ourselves in how people use systems, the effects of systemic issues, and how AI can be used to create good but also to exacerbate social inequity.

The safety expert is not a fan of solution-oriented language and states that cultural value alignment is never going to be solved. “We’re always going to be figuring out how to empower different groups of people. When you treat a group of people as a whole, you’re going to hear the loudest from the people with the most platform or privilege. Feedback is notoriously difficult. Even if we achieve something incredibly powerful, having iterative and continual feedback mechanisms is going to be a continual process,” she said.

The alignment issue keeps many researchers like Solaiman awake at night. Recently, while talking to AIM, acclaimed thinker Nick Bostrom pondered, “How do we ensure that highly cognitively capable systems — and eventually superintelligent AIs — do what their designers intend for them to do?” Bostrom delved deeper into this unsolved technical problem in his book ‘Superintelligence’ to draw more attention to the subject. Meanwhile, the most infamous instance of misaligned AI remains Meta’s BlenderBot, which made racist remarks and professed a dislike for Mark Zuckerberg.

Read more: ‘Pain in the AIs’ by Nick Bostrom

Solaiman actively talks about the alignment problem. For her, it is important to understand what feedback mechanisms look like for the parts of the world where systems are being deployed but which don’t necessarily have direct input into development (like India).

The increasing politicization worries her, she said, referring to RightWingGPT and to claims that systems are too “woke” — claims she ran into when she fine-tuned language models on human rights frameworks.

“It’s crazy to me that human rights would be considered woke. We need to have a better understanding of what is just fun and what fundamentally needs to be encoded in systems to respect people’s rights,” said the AI safety expert, who advises empowering, not overwriting, different cultures.

OpenAI vs Open Source 

When she came to Hugging Face in April 2022, Solaiman didn’t have a background in open source. “I was in awe and so enamored by how open source empowers different groups around the world who don’t often have access to these systems to contribute,” she said.  

A big part of her questioning around model access and release is what it means to make a model more open. Simply releasing model weights isn’t the most accessible option, she opined. “When we released GPT-2, we open-sourced the model weights, but it was Hugging Face that created the ‘Write with Transformers’ interface that people, including myself, started using, especially in a time where people might not be affected by AI,” the HF enthusiast added.
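Her point is easy to illustrate: released weights only become usable once someone writes code around them. Below is a minimal sketch of what that looks like with Hugging Face’s transformers library (the gpt2 model ID is the openly released GPT-2; the Bangla prompt is illustrative, not one from the interview):

```python
# A minimal sketch of what "using released model weights" involves,
# via Hugging Face's transformers library -- the kind of code a web
# interface like 'Write with Transformers' hides behind a text box.
from transformers import pipeline

# Pulls the openly released GPT-2 weights from the Hugging Face Hub.
generator = pipeline("text-generation", model="gpt2")

# Illustrative Bangla prompt ("Bangladesh is"), echoing Solaiman's
# GPT-2 tests; base GPT-2 saw little Bangla, so output will be poor.
prompt = "বাংলাদেশ হলো"
outputs = generator(prompt, max_new_tokens=30, do_sample=True)
print(outputs[0]["generated_text"])
```

Even this short snippet assumes a Python environment, an installed library, and a multi-hundred-megabyte model download, which is exactly the barrier a hosted interface removes.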

Current Research

Solaiman shared that there’s intense pressure on people in the humanities to master computer science. Having programming skills gives her an insight into systems that she otherwise would not have. “But this training needs to come on both sides for truly interdisciplinary research to work. There needs to be respect and an embedding of people who work on safety and ethics in those developer teams. I feel least empowered to do my work when I’m siloed and have less access to engineering infrastructure,” she said.

Currently, Solaiman spends a third of her time on everything from building public policy to ensuring that new regulations are technically informed. Policymakers often have to wear many hats and may not have that level of understanding of what is technically feasible. Right now, that guidance goes mostly to Western governments, though she wishes for more engagement with the rest of the world.

The other two-thirds of her work is research. “There’s just a multifaceted ecosystem of what makes systems better. You have to work with policymakers who can coordinate public interest. But you also just have to understand these systems, and know how to evaluate their behaviors and their impacts,” she added.

“I don’t fear AI systems going rogue in the near term, because people give technical systems their power. We’ve hooked up a lot of our personal lives to social media and to our bank accounts. I don’t fear AI systems getting access to nuclear codes. I fear people giving technical systems or autonomous systems this incredible power and access. So it’s really important to focus on the human aspect,” she concluded, underlining the need for AI regulation.


Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.