DeepMind Releases Algorithm To Create Mind-Blowing Paintings Just From Text

DeepMind researchers have developed an algorithm that generates paintings with a neural visual grammar system, evaluated by a text-conditioned dual encoder.

For years, researchers and artists have applied GANs to a range of tasks, including creating artworks, converting photographs into paintings, performing style transfer and more. The GAN-generated portrait ‘Edmond de Belamy’ sold for $432,500 in 2018.

“Photography from memory is a strange concept but seems to capture concisely much of the aesthetics of GANs. In the same way as GANs, our work can be seen as artistic appropriation because it invents nothing new in terms of mapping from language to image,” said Chrisantha Fernando, a researcher at DeepMind.


How does it work? 

In a paper called ‘Generative Art Using Neural Visual Grammars and Dual Encoders,’ Fernando, S. M. Ali Eslami, Jean-Baptiste Alayrac, Piotr Mirowski, Dylan Banarse and Simon Osindero explained the process of art-making using the proposed algorithm.

Architecture of deep grammatical image generator (Source: arXiv) 

The user inputs a text string, and the algorithm outputs an image as a creative interpretation of, and response to, that string.

Further, it evolves the images using a hierarchical neural Lindenmayer (L-) system, while evaluating the generated images with an image-text dual encoder trained on billions of images and their associated text from the internet.
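To make the pipeline concrete, here is a minimal sketch of the evolve-render-score loop described above. It is not the paper’s implementation: the toy render() below draws coloured rectangles from a flat parameter vector in place of the hierarchical neural L-system, and scoring uses the publicly available CLIP dual encoder as a stand-in for ALIGN, which has not been released.

```python
# A minimal sketch of the evolve-render-score loop, with a toy rectangle
# renderer standing in for the paper's hierarchical neural Lindenmayer
# system and the public CLIP dual encoder standing in for ALIGN.
import numpy as np
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

PROMPT = "a tiger in the jungle"
with torch.no_grad():
    text_emb = model.encode_text(clip.tokenize([PROMPT]).to(device))
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

N_SHAPES = 20                    # each shape: x, y, w, h, r, g, b
GENOME_LEN = N_SHAPES * 7

def render(genome: np.ndarray, size: int = 224) -> Image.Image:
    """Toy stand-in renderer: interpret the genome as coloured rectangles."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for g in np.clip(genome, 0, 1).reshape(N_SHAPES, 7):
        x, y, w, h = (g[:4] * size).astype(int)
        colour = tuple(int(c * 255) for c in g[4:])
        draw.rectangle([x, y, x + w, y + h], fill=colour)
    return img

@torch.no_grad()
def score(genome: np.ndarray) -> float:
    """Cosine similarity between the rendered image and the text prompt."""
    img = preprocess(render(genome)).unsqueeze(0).to(device)
    img_emb = model.encode_image(img)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ text_emb.T).item()

# A simple (1+1) evolution strategy: mutate the genome, keep the better one.
rng = np.random.default_rng(0)
parent = rng.random(GENOME_LEN)
best = score(parent)
for step in range(200):
    child = parent + rng.normal(0.0, 0.05, GENOME_LEN)
    s = score(child)
    if s > best:
        parent, best = child, s
render(parent).save("evolved.png")
```

In the paper’s system the evolved parameters drive the hierarchical neural visual grammar rather than raw shapes, but the basic loop is the same idea: render candidate images, score them against the prompt’s embedding with a dual encoder, and keep the fittest.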

For instance, the visuals below showcase the images produced by the final hierarchical visual grammar system for the text prompt ‘Jungle in the tiger.’

Source: arXiv.org

Similarly, images evolved for the prompts (from top-left to bottom-right) ‘a face,’ ‘scream,’ ‘a cat,’ ‘a smiley face,’ ‘a house on fire,’ ‘a person walking,’ ‘a tiger in the jungle,’ and ‘a cave painting’ are shown below.

Source: arXiv.org

The ALIGN dataset used in this process contains 1.8 billion image-text pairs, with minimal filtering of the captions. Also, the order of the words and the distinction between uppercase and lowercase influence the final image.

For example, the text strings ‘Jungle in the Tiger’ and ‘a tiger in the jungle’ are expected to have different language embeddings. The figure further below shows images evolved for ‘Tiger in a Jungle’ on several independent runs.
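As a rough way to see this sensitivity, the sketch below embeds re-orderings of the same prompt with the public CLIP dual encoder (again a stand-in for ALIGN; whether casing also matters depends on the specific encoder’s tokeniser) and prints their pairwise cosine similarities.

```python
# Minimal sketch: compare text embeddings of re-ordered prompts using the
# public CLIP dual encoder as a stand-in for ALIGN (which is not released).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompts = ["a tiger in the jungle", "Tiger in a Jungle", "Jungle in the Tiger"]
with torch.no_grad():
    emb = model.encode_text(clip.tokenize(prompts).to(device))
emb = emb / emb.norm(dim=-1, keepdim=True)

# Pairwise cosine similarities; off-diagonal values below 1.0 indicate the
# encoder assigns different embeddings to re-ordered prompts, so the images
# evolved against them differ too.
print((emb @ emb.T).cpu().numpy().round(3))
```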

Moreover, it is interesting to note that combining the evolvable generative neural visual grammar with a multimodal transformer automates the production of simple abstracted representations from an image title alone.

Examples of images evolved for ‘Tiger in a Jungle’ (Source: arXiv.org)

French media artist Vera Molnar tweaks a computer program until the desired effect is achieved. Randomly generating and searching for pleasing images is easier when tweaking simple programs than more complex ones.

“This is because, with each added piece of complexity, there is a chance that the pleasing ordered visual Gestalt principle which was of most interest to us (and was serendipitous in its appearance) can be confused, lost, or hidden by disorder,” the researchers explained. Gestalt principles are rules that describe how the human eye perceives and organises visual elements.

Made for artists 

While the artwork and experiments look surreal, we should also be conscious of biases in the multimodal transformer. For instance, when we ask for a ‘self-portrait,’ most portraits produced are of white males. Similarly, asking for a picture of a nurse or a doctor tends to replicate the biases present in images of these professions on the internet. 

However, the researchers believe the artist has the freedom to use the tool to generate multiple designs from just the title of the image until the desired effect is achieved. “We believe this is an exciting new process which marks another shift in the relationship of the artist to their work,” they wrote.

Amit Raja Naik
Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.
