Generating High Resolution Images Using Transformers

Taming Transformers

Transformers are known for modelling long-range interactions in sequential data and are easily adaptable to different tasks, be it natural language processing, computer vision or audio. Unlike Convolutional Neural Networks (CNNs), transformers contain no inductive bias and are free to learn all the complex relationships in the given input. On the one hand this increases expressivity, but on the other it makes them computationally impractical for long sequences such as high-resolution images. In December 2020, Patrick Esser, Robin Rombach and Björn Ommer, AI researchers from Heidelberg University, Germany, published a paper that combines CNNs and transformers to overcome this problem and produce high-resolution images: Taming Transformers for High-Resolution Image Synthesis.
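Why full self-attention is impractical at the pixel level is easy to see with a quick back-of-the-envelope calculation (an illustration, not a figure from the paper): the attention matrix grows quadratically in the number of tokens.

```python
# Rough cost of full self-attention over an image, treating each
# position as one token: the attention matrix has n**2 entries.

def attention_entries(height, width):
    """Number of pairwise attention scores for an h x w token grid."""
    n = height * width
    return n * n

# Modelling a 256x256 image pixel by pixel:
pixels = attention_entries(256, 256)   # 65,536 tokens
# Modelling the same image as a 16x16 grid of codebook indices:
codes = attention_entries(16, 16)      # 256 tokens

print(f"pixel tokens: {pixels:,} attention entries")
print(f"codebook tokens: {codes:,} attention entries")
print(f"reduction factor: {pixels // codes:,}x")
```

This is exactly the gap the codebook representation closes: the transformer attends over a short sequence of discrete codes rather than raw pixels.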

To make transformers more efficient, the Taming Transformers method integrates the inductive bias of CNNs with transformers’ expressivity. To produce high-resolution images, the proposed method:


  • Uses a convolutional VQGAN to effectively learn a codebook of context-rich visual parts.
  • Uses transformers to efficiently model the composition of these parts within high-resolution images.

The Model Architecture of Taming Transformers

The model architecture uses a convolutional VQGAN, which combines an encoder-decoder with adversarial training to produce a codebook: an efficient, context-rich representation of images. The GAN training teaches the generator to output high-resolution images from these codes. Once GAN training is over, the encoder’s quantized output — a sequence of codebook indices that represents the image in compressed form instead of pixels — is handed to the transformer. The transformer is trained to predict the distribution of the next index in this sequence, like an autoregressive model, and the sampled indices are finally decoded to synthesize the required output.
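The codebook idea can be sketched in a few lines: each continuous feature vector from the encoder is replaced by the index of its nearest codebook entry. This is a toy nearest-neighbour quantizer with made-up two-dimensional entries, not the paper’s learned VQGAN codebook.

```python
# Toy vector quantization: map each continuous feature vector to the
# index of its nearest codebook entry, as the VQGAN does at a much
# larger scale. Codebook values here are invented for illustration.

def quantize(vectors, codebook):
    """Return the index of the nearest codebook entry for each vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist(v, codebook[i]))
            for v in vectors]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
features = [(0.1, 0.2), (0.9, 0.1), (0.4, 0.8)]

indices = quantize(features, codebook)
print(indices)  # nearest entries: [0, 1, 2]
```

The transformer never sees the vectors themselves — only sequences of such indices, which it models autoregressively.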

Some results from Taming Transformers


Tasks done by Taming Transformers

  • Image Completion
  • Depth-to-image generation
  • Label-to-image generation
  • Pose-to-human generation
  • Super Resolution

Quick Start with Taming Transformers

Installation & Dependencies

Clone the repository via git and download all the required models and configs.

 !git clone https://github.com/CompVis/taming-transformers
 %cd taming-transformers
 !mkdir -p logs/2020-11-09T13-31-51_sflckr/checkpoints
 !wget '' -O 'logs/2020-11-09T13-31-51_sflckr/checkpoints/last.ckpt'
 !mkdir logs/2020-11-09T13-31-51_sflckr/configs
 !wget '' -O 'logs/2020-11-09T13-31-51_sflckr/configs/2020-11-09T13-31-51-project.yaml' 

Install all the dependencies via pip.

 %pip install omegaconf==2.0.0 pytorch-lightning==1.0.8
 import sys
 sys.path.append(".")  # make the cloned taming package importable

Demo – Taming Transformers via pretrained model

  1. Load the model config and print it.
 from omegaconf import OmegaConf
 import yaml
 config_path = "logs/2020-11-09T13-31-51_sflckr/configs/2020-11-09T13-31-51-project.yaml"
 config = OmegaConf.load(config_path)
 print(yaml.dump(OmegaConf.to_container(config)))
  2. Initialize the model.
 from taming.models.cond_transformer import Net2NetTransformer
 model = Net2NetTransformer(**config.model.params) 
  3. Load the checkpoints.
 import torch
 ckpt_path = "logs/2020-11-09T13-31-51_sflckr/checkpoints/last.ckpt"
 sd = torch.load(ckpt_path, map_location="cpu")["state_dict"]
 missing, unexpected = model.load_state_dict(sd, strict=False)
  4. Load the example semantic image and convert it into a tensor. Here we take label-to-image generation as an example.
 from PIL import Image
 import numpy as np
 segmentation_path = "data/sflckr_segmentations/norway/25735082181_999927fe5a_b.png"
 segmentation = Image.open(segmentation_path)
 segmentation = np.array(segmentation)
 segmentation = np.eye(182)[segmentation]
 segmentation = torch.tensor(segmentation.transpose(2,0,1)[None]).to(dtype=torch.float32, device=model.device) 
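The `np.eye(182)[segmentation]` line above uses a compact NumPy idiom: indexing the identity matrix with an integer label array picks out one row (a one-hot vector) per label. A small example with 4 classes instead of the 182 segmentation classes:

```python
import numpy as np

# np.eye(n)[labels] converts integer class labels to one-hot vectors:
# each label selects the matching row of the identity matrix.
labels = np.array([[0, 2],
                   [3, 1]])          # a 2x2 "segmentation map"
one_hot = np.eye(4)[labels]          # shape (2, 2, 4)

print(one_hot.shape)                 # (2, 2, 4)
print(one_hot[0, 1])                 # class 2 -> [0. 0. 1. 0.]
```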
  5. Visualize the segmentation.
 def show_segmentation(s):
   s = s.detach().cpu().numpy().transpose(0,2,3,1)[0,:,:,None,:]
   colorize = np.random.RandomState(1).randn(1,1,s.shape[-1],3)
   colorize = colorize / colorize.sum(axis=2, keepdims=True)
   s = s@colorize
   s = s[...,0,:]
   s = ((s+1.0)*127.5).clip(0,255).astype(np.uint8)
   s = Image.fromarray(s)
   display(s)  # show the colorized segmentation in the notebook

 show_segmentation(segmentation)
  6. Encode the image tensor to obtain the codebook indices via the VQGAN.
 c_code, c_indices = model.encode_to_c(segmentation)
 print("c_code", c_code.shape, c_code.dtype)
 print("c_indices", c_indices.shape, c_indices.dtype)
 assert c_code.shape[2]*c_code.shape[3] == c_indices.shape[1]
 segmentation_rec = model.cond_stage_model.decode(c_code)
 show_segmentation(torch.softmax(segmentation_rec, dim=1)) 
  7. Take the decoder output to give as input to the transformer model.
 def show_image(s):
   s = s.detach().cpu().numpy().transpose(0,2,3,1)[0]
   s = ((s+1.0)*127.5).clip(0,255).astype(np.uint8)
   s = Image.fromarray(s)
   display(s)  # show the decoded image in the notebook
 codebook_size = config.model.params.first_stage_config.params.embed_dim
 z_indices_shape = c_indices.shape
 z_code_shape = c_code.shape
 z_indices = torch.randint(codebook_size, z_indices_shape, device=model.device)
 x_sample = model.decode_to_img(z_indices, z_code_shape)
  8. The last step is to sample indices from the transformer to produce the required output. The code snippet is available here. You can check at each step how the transformer fills in the codebook indices that are decoded into the final image. The sliding attention window is 16×16.
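The sliding-window idea behind that last step can be sketched as follows. This is a toy autoregressive sampler over a grid of indices: `sample_index` stands in for the real transformer’s next-index prediction, and all names and sizes here are hypothetical, not the repository’s API.

```python
import random

GRID = 32           # full latent grid is GRID x GRID indices
WINDOW = 16         # transformer context is a WINDOW x WINDOW crop
CODEBOOK_SIZE = 1024

def sample_index(window):
    """Placeholder for the transformer: predict the next codebook index
    from the indices visible inside the current window."""
    return random.randrange(CODEBOOK_SIZE)

def sample_grid():
    grid = [[None] * GRID for _ in range(GRID)]
    for i in range(GRID):
        for j in range(GRID):
            # Slide the window so position (i, j) stays inside it,
            # clamped to the grid boundaries.
            top = min(max(0, i - WINDOW // 2), GRID - WINDOW)
            left = min(max(0, j - WINDOW // 2), GRID - WINDOW)
            window = [row[left:left + WINDOW]
                      for row in grid[top:top + WINDOW]]
            grid[i][j] = sample_index(window)
    return grid

grid = sample_grid()
print(len(grid), len(grid[0]))   # 32 32
```

Because the window is restricted to 16×16, attention cost stays fixed no matter how large the full latent grid (and hence the output image) grows.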

Important Links

  • Check this link to run streamlit on Colab.
  • Pre-trained models for image generation, depth-to-image generation, super-resolution, etc. are available here.
  • For training the model from scratch, you can refer here.


In this article, we have given an overview of Taming Transformers for High-Resolution Image Synthesis. Instead of working on pixels, the model is trained on a codebook from a VQGAN, whose indices are fed to a transformer architecture to generate the required results. This post discussed the need for, model architecture of, and results and tasks of the proposed method, and included a basic tutorial on using the Taming Transformers pre-trained models. The final results show that this method outperforms previous state-of-the-art methods based on convolutional architectures.



Aishwarya Verma
A data science enthusiast and a post-graduate in Big Data Analytics. Creative and organized with an analytical bent of mind.
