DeepMind researchers have developed an algorithm that generates paintings using a neural visual grammar system, evaluated with a text-conditioned dual encoder.
For years, researchers and artists have applied GANs to multiple tasks, including creating artworks, converting photographs into beautiful paintings, style transfer and more. The GAN-generated portrait 'Edmond de Belamy' sold for $432,500 in 2018.
“Photography from memory is a strange concept but seems to capture concisely much of the aesthetics of GANs. In the same way as GANs, our work can be seen as artistic appropriation because it invents nothing new in terms of mapping from language to image,” said Chrisantha Fernando, a researcher at DeepMind.
How does it work?
In a paper called ‘Generative Art Using Neural Visual Grammars and Dual Encoders,’ Fernando, S. M. Ali Eslami, Jean-Baptiste Alayrac, Piotr Mirowski, Dylan Banarse and Simon Osindero explained the process of art-making using the proposed algorithm.
The algorithm takes a text string from the user as input. As a creative response, it then outputs an image that interprets that string.
It evolves the images using a hierarchical neural Lindenmayer system and evaluates the generated images with an image-text dual encoder trained on billions of images and their associated text from the internet.
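To give a rough sense of the pipeline, the sketch below strips it to its skeleton: a Lindenmayer-style string rewriter serves as the image "genome," and an evolutionary loop keeps the genomes that score best against the text prompt. This is a minimal, self-contained illustration, not DeepMind's actual code; in particular, the `fitness` placeholder stands in for rendering the expanded string to an image and scoring it with the dual encoder, and all names here are invented for the example.

```python
import random

# Minimal Lindenmayer (L-system) rewriter: repeatedly applies production
# rules to an axiom string. DeepMind's system is far richer (hierarchical,
# with neural components), but the string-rewriting core looks like this.
def expand(axiom, rules, depth):
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy genome: an axiom plus one mutable production rule over a tiny alphabet.
def random_genome():
    return {
        "axiom": "F",
        "rules": {"F": "".join(random.choice("FLR") for _ in range(4))},
    }

def mutate(genome):
    child = {"axiom": genome["axiom"], "rules": dict(genome["rules"])}
    rule = list(child["rules"]["F"])
    rule[random.randrange(len(rule))] = random.choice("FLR")
    child["rules"]["F"] = "".join(rule)
    return child

# Placeholder fitness: in the paper, the expanded string would be rendered
# to an image and scored by the image-text dual encoder against the prompt.
# Here we simply reward long, varied strings so the loop runs end to end.
def fitness(genome, prompt):
    s = expand(genome["axiom"], genome["rules"], depth=4)
    return len(set(s)) * len(s)

def evolve(prompt, generations=50, pop_size=16):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the top half, refill the population with mutated survivors.
        population.sort(key=lambda g: fitness(g, prompt), reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=lambda g: fitness(g, prompt))

best = evolve("a tiger in the jungle")
print(expand(best["axiom"], best["rules"], depth=4))
```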
For instance, the visuals below showcase the images produced by the final hierarchical visual grammar system for the text prompt 'Jungle in the tiger.'
Similarly, another set of evolved images, for the sentences (from top-left to bottom-right) 'a face,' 'scream,' 'a cat,' 'a smiley face,' 'a house on fire,' 'a person walking,' 'a tiger in the jungle,' and 'a cave painting,' is shown below.
The ALIGN dataset used in this process contains 1.8 billion image-text pairs, with minimal filtering of the captions. The order of the words and the distinction between uppercase and lowercase also influence the final image.
For example, the text strings 'Jungle in the Tiger' and 'a tiger in the jungle' are expected to have different language embeddings. The visuals below show images evolved for 'Tiger in a Jungle' over several independent runs.
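ALIGN's weights are not publicly released, so the sketch below uses the open-source OpenCLIP dual encoder as a stand-in to show the underlying mechanism: reordering the words of a prompt changes its text embedding, and hence the target the evolved image is scored against. One caveat: this particular tokenizer lowercases its input, so only word order (not casing) differs in this demo, whereas the article notes that casing also matters for ALIGN.

```python
import torch
import open_clip

# Load an open-source dual encoder (stand-in for ALIGN, which is not public).
model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

prompts = ["Jungle in the Tiger", "a tiger in the jungle"]
with torch.no_grad():
    emb = model.encode_text(tokenizer(prompts))
    emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalise embeddings

# Cosine similarity between the two prompts: below 1.0, confirming the
# encoder treats the reordered string as a different optimisation target.
print(float(emb[0] @ emb[1]))
```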
Moreover, it is interesting to note that the evolvable generative neural visual grammar, paired with a multimodal transformer, automates the production of simple abstracted representations based on an image title alone.
French media artist Vera Molnar tweaks a computer program until the desired effect is achieved. The random generation of, and search for, pleasing images is easier when tweaking simple programs than more complex ones.
"This is because, with each added piece of complexity, there is a chance that the pleasing ordered visual Gestalt which was of most interest to us (and was serendipitous in its appearance) can be confused, lost, or hidden by disorder," shared the researchers. Gestalt principles are rules that describe how the human eye perceives visual elements.
Made for artists
While the artwork and experiments look surreal, we should also be conscious of biases in the multimodal transformer. For instance, when we ask for a ‘self-portrait,’ most portraits produced are of white males. Similarly, asking for a picture of a nurse or a doctor tends to replicate the biases present in images of these professions on the internet.
However, the researchers believe the artist has the freedom to use the tool to generate multiple designs with just the title of the image until the desired effect/result is achieved. “We believe this is an exciting new process which marks another shift in the relationship of the artist to their work.”