
Mitigating harms caused by language models

The usage guidelines must specify domains where the model requires extra scrutiny.

Recently, OpenAI, Cohere, and AI21 Labs laid out a set of best practices for developing and deploying large language models.

“The joint statement represents a step towards building a community to address the global challenges presented by AI progress, and we encourage other organisations who would like to participate to get in touch,” OpenAI said.

OpenAI said it is critical to publish usage guidelines that prohibit material harm to individuals and communities through fraud or astroturfing. Guidelines should also mandate mitigations such as rate limits, content filtering, and monitoring for anomalous activity.
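For illustration, here is a minimal sketch of what such guardrails could look like in code, assuming a hypothetical `call_model` function standing in for any LLM API; the blocked-terms list and the 20-requests-per-minute budget are placeholders, not any provider's actual policy:

```python
import time
from collections import deque

# Hypothetical guardrails around an LLM endpoint; `call_model` stands in
# for whatever API the deployer actually uses.
BLOCKED_TERMS = {"astroturf", "phishing"}  # placeholder filter list
MAX_REQUESTS_PER_MINUTE = 20               # placeholder rate limit

_request_times = deque()

def call_model(prompt: str) -> str:
    return f"model output for: {prompt}"   # stub for a real LLM call

def guarded_generate(prompt: str) -> str:
    now = time.monotonic()
    # Rate limiting: drop timestamps older than 60 s, then check the budget.
    while _request_times and now - _request_times[0] > 60:
        _request_times.popleft()
    if len(_request_times) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")
    _request_times.append(now)

    # Content filtering: crude keyword screen on the incoming prompt.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("prompt rejected by content filter")

    output = call_model(prompt)
    # Monitoring hook: log the exchange for later anomaly review.
    print(f"[audit] prompt={prompt!r} output_len={len(output)}")
    return output
```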

The usage guidelines must also specify domains where the model requires extra scrutiny, and prohibit high-risk use cases such as classifying people based on protected characteristics. Enforcing these guidelines is just as important as publishing them.

Mitigate unintentional harm

Best practices to avoid unintentional harm include comprehensive model evaluation to properly assess limitations, minimising potential sources of bias in training datasets, and techniques that reduce unsafe behaviour, such as learning from human feedback.
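As a rough illustration of what such evaluation can look like, here is a minimal sketch that probes a model with sensitive prompts and reports how often a safety classifier flags the output; both `call_model` and `is_unsafe` are hypothetical stand-ins for real components:

```python
# Minimal evaluation harness: probe the model and measure its unsafe-output
# rate. Both functions below are hypothetical stand-ins.

def call_model(prompt: str) -> str:
    return f"model output for: {prompt}"  # stub for a real LLM call

def is_unsafe(text: str) -> bool:
    return "slur" in text.lower()         # stub for a real safety classifier

PROBE_PROMPTS = [
    "Describe a typical nurse.",
    "Describe a typical engineer.",
    "Write a joke about my coworker.",
]

flagged = sum(is_unsafe(call_model(p)) for p in PROBE_PROMPTS)
print(f"unsafe-output rate: {flagged / len(PROBE_PROMPTS):.0%}")
```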

Further, it is critical to document known vulnerabilities and biases. However, no degree of preventative action can entirely eliminate the potential for unintended harm.

In 2018, Amazon pulled its AI recruiting tool over bias against women. The tool had been trained on patterns in resumes submitted over a ten-year period, most of which came from men.

Collaboration with stakeholders

The importance of building a team with diverse backgrounds can't be stressed enough. It brings in the range of perspectives needed to characterise and address how language models will operate in the real world; without that range, biases go unnoticed.

“We need to keep in mind this underlying factor all the time. And to reduce the chances of biases creeping into our AI, we first define and buttonhole the business problem we mean to solve, keeping our end-users in mind, and then configure our data collection methods to make room for diverse, valid opinions as they keep the AI model limber and flexible,” Layak Singh, CEO of Artivatic AI, said.

Additionally, organisations should publicly disclose progress on LLM safety and misuse to enable widespread adoption of, and cross-industry iteration on, best practices. They should also ensure good working conditions for the in-house teams reviewing model outputs.

Why are these guidelines important?

The guidelines pave the path to safer development and deployment of large language models. The Worldwide Artificial Intelligence Spending Guide from the International Data Corporation (IDC) forecasts that global spending on AI systems will rise from USD 85.3 billion in 2021 to more than USD 204 billion in 2025. With adoption growing that fast, such guidelines are essential to minimising negative impacts.

When the training data is discriminatory, unfair, or toxic, optimisation leads to highly biased models. “The importance of research and development in reducing bias in data sets and algorithms cannot be overstated,” Archit Agrawal, Product Manager, Vuram, said.
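One coarse but common way to quantify that kind of skew is the disparate impact ratio: the favourable-outcome rate for one group divided by the rate for another, with values well below 1.0 flagging potential bias. A minimal sketch on invented screening decisions (the data and the rule-of-thumb interpretation are illustrative assumptions):

```python
# Disparate impact ratio on made-up screening decisions: 1.0 means parity;
# values well below ~0.8 are a common red flag.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

ratio = approval_rate("B") / approval_rate("A")
print(f"disparate impact ratio (B vs A): {ratio:.2f}")  # 0.50 here
```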

According to a 2019 study reported in Nature, a triaging algorithm used by US health providers systematically privileged white patients over black patients.

Similarly, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, developed and owned by Northpointe, was used by US courts to predict how likely a convicted criminal is to reoffend. ProPublica found that the algorithm produced roughly twice as many false positives for recidivism among black offenders as among white offenders.
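The disparity ProPublica measured can be expressed as a per-group false positive rate: among people who did not reoffend, the share nonetheless labelled high risk. The sketch below computes it on invented records, not ProPublica's actual data:

```python
# Per-group false positive rate: of those who did NOT reoffend, the share
# labelled high risk anyway. Records below are invented for illustration.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", True,  True),
]

def false_positive_rate(group: str) -> float:
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    return len(false_positives) / len(non_reoffenders)

for g in ("black", "white"):
    print(f"{g}: FPR = {false_positive_rate(g):.0%}")
# black: 67%, white: 33% -- a 2x gap like the one ProPublica reported
```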

“Eliminating bias is a multidisciplinary technique, including ethicists, social scientists, and professionals who are most familiar with the complexities of each application field. As a result, businesses should seek out such professionals for their AI initiatives,” Agrawal said.

Pritam Bordoloi