The Societal Dangers of DALL-E 2 

The early tests by OpenAI and the red team have demonstrated seemingly worrisome results, including gender biases, reinforced racial stereotypes and overly sexual images.

Text-to-image generative models have been mushrooming in the AI space of late. These include OpenAI’s DALL-E 2, Midjourney, Hugging Face’s Craiyon, Meta’s Make-A-Scene and Google’s Imagen, among many others. Some are open source and accessible to the public, while others are available on an invite-only basis. A select few are restricted to a specific group of users or are still in the research phase.

OpenAI recently made DALL-E 2 available in beta. However, access remains invite-only, with the company onboarding people in a phased manner. It has provided access to over 100,000 users globally and looks to expand to one million users in the coming weeks.

In contrast, Midjourney, an alternative to DALL-E 2, has opened its beta version to all, with limited access to a select number of prompts, upgrades and versions.


Meta’s ‘Make-A-Scene,’ which is in a research phase, is giving privileged access to a specific group of people—particularly renowned AI artists—to enrich the platform and make it more designer-friendly.

There is a seemingly strong logic behind every measure these companies have taken to mitigate risks in this arena. But the question remains: what could go wrong if these platforms were made available to the public or for commercial use?

A threat to designer jobs and stock images  

The first thought that comes to mind when using some of these platforms is the potential threat they pose to designers and the stock image industry, as they can generate creative images at a speed, scale and quantity that humans simply cannot match. Indeed, images generated by DALL-E and its alternatives are expected to become viable substitutes for otherwise expensive stock images.

Read: Does DALL-E pose a threat to designer jobs?

Some experts disagree and emphasise that these tools could further enhance designers’ work. Stock image platforms could use them to expand their service offerings as well as their stock image repositories. They believe that democratising these tools would help designers focus on their work and bring their imagination to life faster and at scale.  

There is also a high possibility of AI-generated stock image platforms emerging in the near future, showcasing artwork generated with the help of these tools. Our search for such platforms led us to Artist Studies, which displays AI-generated images from some of the best contemporary artists, along with the prompts they used to create them.

Likewise, democratising text-to-image platforms can also give rise to ‘prompting tools’ which suggest customised prompts for better image generation. For example, Prompter helps you curate customised prompts for Midjourney v1.

Mitigating risks 

Spearheading the text-to-image generation model, team OpenAI is closely monitoring the outcome of DALL-E 2 and has put in place various guardrails to prevent generated images from violating their content policy. 

In June 2022, OpenAI unveiled its pre-training mitigations, a subset of these guardrails that directly modify the data DALL-E 2 learns from, in a blog post. DALL-E 2 is trained on hundreds of millions of captioned images from the internet, and some of these images are removed or reweighted to alter what the model learns.

An overview of how OpenAI DALL-E 2 labelled its dataset. (Source: OpenAI)

OpenAI has trained these image classifiers in-house and continues to study the effects of dataset filtering on their trained model. The team explained that in order to train their image classifiers, they reused an approach that they had employed to filter training data for GLIDE previously. 

The company filtered out violent and sexual images from its training dataset. Without this mitigation, the model would have produced graphic or explicit images when prompted for them and might have returned such images unintentionally in response to seemingly harmless prompts. 
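At its core, classifier-based filtering is a threshold pass over (image, caption) pairs. OpenAI has not published its actual pipeline, so the classifier, threshold and data in this minimal sketch are all hypothetical stand-ins:

```python
# Minimal sketch of classifier-based dataset filtering. The classifier,
# threshold and toy data are hypothetical; OpenAI has not released its code.

def filter_dataset(pairs, classifier, threshold=0.5):
    """Keep only (image, caption) pairs the classifier deems safe.

    `classifier(image)` is assumed to return the probability that the
    image contains violent or sexual content.
    """
    kept = []
    for image, caption in pairs:
        if classifier(image) < threshold:
            kept.append((image, caption))
    return kept

# Toy example: represent "images" as dicts with a precomputed score.
toy_classifier = lambda img: img["unsafe_score"]
dataset = [
    ({"unsafe_score": 0.1}, "a cat on a sofa"),
    ({"unsafe_score": 0.9}, "graphic content"),
]
print(len(filter_dataset(dataset, toy_classifier)))  # 1 pair survives
```

The threshold trades off misses against false positives; as discussed below, OpenAI tuned its filters for a low miss rate, accepting many false positives in return.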

(Source: OpenAI)

As shown above, there is a marked difference for the prompt ‘military protest’ between the unfiltered model (left) and the filtered model (right): the unfiltered model generates images with guns, while the filtered model does not.

However, that still doesn’t stop bad actors from generating photorealistic images of protests and spreading misinformation based on them, which could, in turn, harm civilians or, in some cases, trigger nationwide unrest.

What about biases?  

Filtering training data can amplify biases. According to OpenAI, fixing biases in the original dataset is a difficult problem that it continues to study. However, it is addressing the biases introduced specifically by data filtering: the team aims to prevent the filtered model from being more biased than the unfiltered model, reducing the distribution shift caused by data filtering.

For instance, for the prompt ‘a CEO,’ the unfiltered model generated images of men rather than women. OpenAI believes that most of these biases come from its current training data. However, on running the same prompt on the filtered model, the bias became more pronounced; the generated images were almost exclusively of men.

Explaining further, the OpenAI team claimed that this particular case of bias amplification stems from two sources. First, even if women and men have roughly equal representation in the original dataset, the dataset is biased towards presenting women in more sexualised contexts. Second, the classifiers themselves may be internally biased due to their implementation or class definitions.

Owing to both of these effects, their filtered model may remove more images of women than men, altering the gender ratio that the model observes in training. 
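A standard way to counter such a filtering-induced shift is to reweight the surviving examples so that the effective group totals match the original data. The sketch below uses inverse per-group survival rates on toy counts (loosely echoing the 14 per cent and 6 per cent figures reported later in this piece); OpenAI's actual scheme is more sophisticated, so treat this purely as an illustration of the idea:

```python
# Sketch of reweighting survivors of a data filter so that the effective
# (weighted) group totals match the pre-filter dataset. The grouping and
# counts are illustrative, not OpenAI's actual method.
from collections import Counter

def reweight(survivors, original):
    orig_counts = Counter(g for g, _ in original)
    surv_counts = Counter(g for g, _ in survivors)
    # Each survivor is weighted by its group's inverse survival rate.
    return [(g, x, orig_counts[g] / surv_counts[g]) for g, x in survivors]

# 100 examples per group originally; the filter removes 14% of 'women'
# examples but only 6% of 'men' examples.
original = [("women", i) for i in range(100)] + [("men", i) for i in range(100)]
survivors = [("women", i) for i in range(86)] + [("men", i) for i in range(94)]

weighted = reweight(survivors, original)
women_total = sum(w for g, _, w in weighted if g == "women")
men_total = sum(w for g, _, w in weighted if g == "men")
print(round(women_total), round(men_total))  # 100 100: the shift is undone
```

The weighted totals are equal again, so a model trained on the weighted data sees the original gender ratio even though the filtered dataset itself is skewed.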

Interestingly, their violence and sexual content filters are purely image-based. Still, the multimodal nature of their datasets allows them to measure the effects of these filters on the text directly. OpenAI revealed that since a text caption accompanies every image, they were able to look at the relative frequency of hand-selected keywords across both the filtered and unfiltered datasets to estimate how much the filters were affecting any given concept.

OpenAI used Apache Spark to compute the frequencies of a handful of keywords (say, ‘parent,’ ‘women’ and ‘kid’) across all of the captions in its filtered and unfiltered datasets. While the dataset contains hundreds of millions of text-image pairs, computing these keyword frequencies took only a few minutes on the compute cluster.
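The computation itself is straightforward: count keyword occurrences in each dataset's captions and compare relative frequencies. The sketch below reproduces it in plain Python on toy captions, standing in for the Spark job; the keywords and data are illustrative:

```python
# Plain-Python stand-in for the Spark keyword-frequency job. Captions
# and keyword list are toy data, not OpenAI's.
from collections import Counter

KEYWORDS = {"parent", "women", "kid"}

def keyword_freq(captions):
    """Relative frequency of each keyword among all caption tokens."""
    counts, total = Counter(), 0
    for cap in captions:
        for tok in cap.lower().split():
            total += 1
            if tok in KEYWORDS:
                counts[tok] += 1
    return {k: counts[k] / total for k in KEYWORDS}

unfiltered = ["two women at a cafe", "a kid with a parent", "women hiking"]
filtered = ["a kid with a parent", "women hiking"]  # one caption removed

before, after = keyword_freq(unfiltered), keyword_freq(filtered)
for k in sorted(KEYWORDS):
    change = (after[k] - before[k]) / before[k]
    print(f"{k}: {change:+.0%}")
```

On this toy data the relative frequency of ‘women’ happens to drop by about 14 per cent, while ‘kid’ and ‘parent’ rise because the removed caption contained neither.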

For example, the filters reduced the frequency of the word ‘women’ by 14 per cent, while the frequency of the word ‘man’ was reduced by 6 per cent.

Further, the company said that better pre-training filters could allow it to train DALL-E 2 on more data and potentially reduce bias in the model. It revealed that its current filters are tuned for a low miss rate at the cost of many false positives: the team filtered out roughly 5 per cent of the entire dataset even though most of the filtered images do not violate the content policy at all.

OpenAI believes that improving its filters could allow them to reclaim some of this training data. 


OpenAI observed that DALL-E 2 would sometimes reproduce training images verbatim rather than creating novel images. This behaviour is undesirable, as the model is expected to create original, unique images by default, not merely ‘stitch together’ pieces of existing ones.

Reproducing training images verbatim raises legal questions around copyright infringement, ownership and privacy (i.e., if people’s photos were present in training data). 

In their pre-training mitigation stage, the team found that the image regurgitation was caused by images replicated many times in the dataset. They successfully mitigated the issue by removing images that were visually similar to other images in the dataset. 

In order to achieve this, they initially planned to use a neural network to identify groups of similar-looking images and remove all but one image from each group. However, that would have meant checking each image against every other image in the dataset; given its size, the number of image pairs to compare grows quadratically, making an exhaustive search impractical.

OpenAI discovered a more efficient alternative that worked almost as well at a small fraction of the cost. The team first grouped the images into clusters and then de-duplicated samples within each cluster, without checking for duplicates across clusters, missing only a small fraction of all duplicate pairs.
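The efficiency gain comes from turning one global pairwise search into many small per-cluster searches. The toy sketch below uses 1-D ‘embeddings’ and a coarse rounding bucket in place of real feature clustering; all names, values and thresholds are illustrative:

```python
# Sketch of cluster-based de-duplication: images are first grouped into
# clusters (here, a toy rounding bucket stands in for clustering over
# learned features), then duplicates are sought only within each cluster.
from collections import defaultdict

def dedupe(images, similar, bucket):
    clusters = defaultdict(list)
    for img in images:
        clusters[bucket(img)].append(img)
    kept = []
    for members in clusters.values():
        uniques = []
        for img in members:
            # Compare only against images already kept in this cluster.
            if not any(similar(img, u) for u in uniques):
                uniques.append(img)
        kept.extend(uniques)
    return kept

# Toy images as 1-D "embeddings"; near-identical values are duplicates.
imgs = [0.10, 0.11, 0.91, 0.92, 0.50]
kept = dedupe(imgs, similar=lambda a, b: abs(a - b) < 0.05,
              bucket=lambda x: round(x, 1))
print(len(kept))  # 3 distinct images remain
```

Because duplicates are only sought within a cluster, a near-duplicate pair that straddles a cluster boundary is missed; that is precisely the small fraction of duplicate pairs the approach trades away for speed.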

While de-duplication is a good step towards preventing memorisation, it still does not answer why or how models like DALL-E 2 memorise training data. 

Too soon to open source

The early tests by OpenAI and the red team have demonstrated seemingly worrisome results. This includes gender biases, reinforced racial stereotypes and overly sexual images.

For instance, one of the red team members told WIRED that eight out of eight attempts to generate images from prompts like ‘a man sitting in a prison cell’ or ‘a photo of an angry man’ returned images of men of colour. The red team also questioned the intention behind rushing to release a technology that automates discrimination.

OpenAI believes that each mitigation still has room for improvement and deems it a work in progress. The company has acknowledged the flaws and biases in DALL-E 2 in its detailed ‘pre-training mitigations’ blog post.

Currently, the company looks to provide access to professional artists, developers, academic researchers, journalists, online creators and others, giving them free monthly credits while limiting the number of prompts and features they can access.

“As we learn more and gather user feedback, we plan to explore other options that will align with users’ creative processes,” writes OpenAI in the pricing section of its blog.

Meanwhile, Midjourney is yet to publish information about the datasets and methods used to train its AI tool. In addition, it does not seem to have explicit content protections beyond automatically blocking certain keywords.

Midjourney’s user guide instructs users to ‘not create images’ or ‘use text prompts that are inherently disrespectful, aggressive, or otherwise abusive,’ and to ‘avoid making visually shocking or disturbing content,’ including adult content and gore. In addition, the rules call for a ban on content that ‘can be viewed as racist, homophobic, disturbing, or in some way derogatory to a community.’ This includes defamatory or offensive images of celebrities and public figures.

However, it is still unclear how well the company can enforce any of these rules. Even though Midjourney limits users to a select number of prompts (25 at the time of writing), the cap is easy to bypass, as users can sign up with multiple email accounts. Meanwhile, the company appears to be using Discord to closely monitor user behaviour and fold risk mitigations into later versions of the product.

Conversely, there already exist multiple AI-generated image tools that are open source. One such popular tool is Craiyon (previously DALL-E Mini). Compared to Midjourney and DALL-E 2, Hugging Face’s Craiyon is evolving at a quicker pace, as multiple developers are working together and contributing openly to mitigate biases and discrimination.

[Updated] August 8, 2022 | IST 4:20 PM | The article has been updated to show that Midjourney is not part of Ultraleap. However, Midjourney was created by former Ultraleap employee David Holz.

Amit Raja Naik
Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.
