Hands-On Guide To Image Extrapolation With Boundless-GAN


Suppose that, given a sub-image (a part of an original image) of a wild scene, you are asked to draw the entire image beyond its boundary without ever seeing the full picture. We humans usually first imagine the entire image from prior knowledge, then draw the details progressively outwards from the sub-image and the imagined whole. This technique is called extrapolation.

Image extrapolation is a computer vision task that aims to fill in the region surrounding a sub-image, e.g. completing an object that appears in the image or predicting the unseen view of a scene. This task is extremely challenging since the extrapolated image must be realistic, with reasonable and meaningful context. Moreover, the extrapolated region should be consistent in structure and texture with the original sub-image.


Extrapolating an image remains challenging even for machines, but thanks to developments in GAN networks, a lot of effort has pushed this task forward with good results. However, existing GAN-based models for image extrapolation mainly generate a whole image and paste the given part onto it, making the final image look blurry or jarring. In addition, due to the distant-context generation problem, directly applying inpainting methods tends to generate blurry or repetitive pixels with inconsistent semantics.

Boundless GAN is a network developed by the Google Research team that adds semantic conditioning to the discriminator of a GAN, achieving strong results on image extension with coherent semantics and visually pleasing colors and textures. In this article, we will see how to use this network for our extrapolation task.

Code Implementation: Image Extrapolation With Boundless-GAN

Import all dependencies:       
# for numeric operations
import numpy as np
# for visualization
import matplotlib.pyplot as plt
# for image handling
from PIL import Image
# to handle tensors 
import tensorflow as tf
# to load the model from hub
import tensorflow_hub as hub
# to handle web address for images
from six.moves.urllib.request import urlopen
Helper Functions:

Helper functions are user-defined functions that let us reuse code for repetitive tasks.

To read images:
def image_read(filename):
  file = None
  if filename.startswith('http'):
    file = urlopen(filename)
  else:
    file = tf.io.gfile.GFile(filename, 'rb')
  pil_image = Image.open(file)
  width, height = pil_image.size
  # crop to a square to avoid distortion when resizing
  pil_image = pil_image.crop((0, 0, height, height))
  pil_image = pil_image.resize((257, 257), Image.ANTIALIAS)
  unscaled_image = np.array(pil_image)
  image_np = np.expand_dims(unscaled_image.astype(np.float32) / 255., axis=0)
  return image_np

The above function takes a filename as an argument and checks whether the image should be fetched from the web or read from the local machine; either way, we end up with the image opened in binary mode in the variable file. The model expects a 257 x 257 input with 3 channels, so we crop the image to a square (to avoid distortion when resizing), resize it, scale its pixel values to the range 0 to 1, and add a batch dimension.
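The scaling and batching step can be checked in isolation. A minimal sketch with a random stand-in array (no real image is decoded here; the 257 x 257 size simply matches the model's expected input):

```python
import numpy as np

# random stand-in for a decoded 257 x 257 RGB image (uint8 pixels)
unscaled_image = np.random.randint(0, 256, size=(257, 257, 3), dtype=np.uint8)

# scale to [0, 1] floats and add a leading batch dimension
image_np = np.expand_dims(unscaled_image.astype(np.float32) / 255., axis=0)

print(image_np.shape)  # (1, 257, 257, 3)
```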

Visualization function:

Here we use matplotlib for comparison purposes, comparing the generated result against the original image.

def comparison(img_original, img_masked, img_filled):
  plt.figure(figsize=(24, 12))
  for i, (img, title) in enumerate(
      zip([img_original, img_masked, img_filled],
          ['Original', 'Masked', 'Generated'])):
    plt.subplot(1, 3, i + 1)
    plt.imshow(np.squeeze(img))
    plt.title(title, fontsize=20)
    plt.axis('off')
  plt.show()
Load the images:

Here we will load sample images from the internet, but you can also try your own image. Try to avoid images of humans, since the model has certain limitations there; you can still experiment with them if you wish.

#image_path = ''

image_path = ""

#image_path = ""

# image_path = ""

image = image_read(image_path)
Select the model from Tensorflow-Hub:

There are three versions of the model: Boundless Half, Boundless Quarter, and Boundless Three Quarters. Each takes the input image and masks the corresponding portion of it.
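The model applies the mask internally, but the idea is easy to sketch: a "Quarter" model hides the rightmost 25% of the image width, a "Half" model hides 50%, and so on. A minimal NumPy illustration (the mask_right helper is hypothetical, not part of the Boundless API):

```python
import numpy as np

def mask_right(image, fraction):
    """Zero out the rightmost `fraction` of the image width."""
    masked = image.copy()
    width = image.shape[1]
    masked[:, int(width * (1 - fraction)):, :] = 0.0
    return masked

image = np.ones((257, 257, 3), dtype=np.float32)
quarter_masked = mask_right(image, 0.25)  # hides columns 192..256
half_masked = mask_right(image, 0.5)      # hides columns 128..256
```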

model_map = {'Boundless Half' : '',
    'Boundless Quarter' : '', 
    'Boundless Three Quarters' : ''}

model_name = 'Boundless Quarter'
model_handle = model_map[model_name]

print('Loading model {} ({})'.format(model_name,model_handle))
model = hub.load(model_handle)

Let’s take a look at our sample images.

Doing the inference:

The model has two outputs: one is the masked image, and the other is the image generated by the model.


result = model.signatures['default'](tf.constant((image)))
generated_img = result['default']
masked_img = result['masked_image']
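Since only the masked region is newly synthesized, a common follow-up step is to composite the generator's pixels back onto the known pixels, so the visible part stays identical to the input. A sketch with random stand-in arrays (the real tensors would come from the model call above; the 25% right-side mask is an assumption matching the Quarter model):

```python
import numpy as np

# random stand-ins for the model's input and generated output (batch, H, W, C)
original = np.random.rand(1, 257, 257, 3).astype(np.float32)
generated = np.random.rand(1, 257, 257, 3).astype(np.float32)

# binary mask: 1 where pixels were hidden (right quarter), 0 where known
mask = np.zeros((1, 257, 257, 1), dtype=np.float32)
mask[:, :, int(257 * 0.75):, :] = 1.0

# keep known pixels from the input, take the rest from the generator
composited = original * (1 - mask) + generated * mask
```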

1st sample:

2nd sample:

3rd sample:

4th sample:


We have seen Boundless GAN, a novel generative framework for extrapolating an image. The model masks the image by 25%, 50%, or 75% and, during inference, relies on semantic conditioning of the GAN's discriminator, which brings the reconstruction of the masked portion close to the original; the results above are evidence of this.



Copyright Analytics India Magazine Pvt Ltd
