Restore Old Photos Back to Life Using Deep Latent Space Translation

Bringing Old Photos Back to Life

“Bringing Old Photos Back to Life” is a deep learning computer vision project created by Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Jing Liao, and Fang Wen from City University of Hong Kong and Microsoft Research Asia. The project tackles the domain gap between synthetic training data and real old vintage photos with a new method called the triplet domain translation network. More specifically, the researchers trained two variational autoencoders (VAEs) to transform old photos and clean photos into two latent spaces, and the translation between these latent spaces is learned on synthetic paired data, so the learned network generalizes well to real photos.

To handle the different kinds of degradation, the restoration network has two branches:
  • a global branch with a partial non-local block that targets structured defects such as scratches and dust spots, and
  • a local branch that targets unstructured defects such as noise and blurriness.

These two branches are fused in the latent space, which improves the network’s ability to restore old photos that suffer from multiple defects at once.
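To make the two-branch idea concrete, here is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the repository’s code: the class names (GlobalBranch, LocalBranch, LatentRestorer) are hypothetical, and the partial non-local attention is heavily simplified.

 # Hypothetical sketch of fusing a global (non-local) and a local (conv) branch
 # in latent space; class names are illustrative, not the official implementation.
 import torch
 import torch.nn as nn
 import torch.nn.functional as F

 class GlobalBranch(nn.Module):
     """Simplified partial non-local attention for structured defects (scratches, dust)."""
     def __init__(self, channels):
         super().__init__()
         self.theta = nn.Conv2d(channels, channels // 2, 1)
         self.phi = nn.Conv2d(channels, channels // 2, 1)
         self.g = nn.Conv2d(channels, channels // 2, 1)
         self.out = nn.Conv2d(channels // 2, channels, 1)

     def forward(self, z, mask):
         # mask: float tensor, 1 where the latent location is corrupted (e.g. a scratch)
         B, C, H, W = z.shape
         q = self.theta(z).flatten(2).transpose(1, 2)        # B x HW x C/2
         k = self.phi(z).flatten(2)                          # B x C/2 x HW
         attn = torch.softmax(q @ k / (C // 2) ** 0.5, dim=-1)
         # "partial" idea: suppress attention towards corrupted keys so holes
         # are filled from valid context only, then renormalize
         valid = (1 - mask).flatten(2)                       # B x 1 x HW
         attn = attn * valid
         attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-6)
         v = self.g(z).flatten(2).transpose(1, 2)            # B x HW x C/2
         out = (attn @ v).transpose(1, 2).reshape(B, C // 2, H, W)
         return z + self.out(out)

 class LocalBranch(nn.Module):
     """Plain residual convolutions for unstructured defects (noise, blur)."""
     def __init__(self, channels):
         super().__init__()
         self.body = nn.Sequential(
             nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
             nn.Conv2d(channels, channels, 3, padding=1))

     def forward(self, z):
         return z + self.body(z)

 class LatentRestorer(nn.Module):
     """Runs both branches on the latent features and fuses them."""
     def __init__(self, channels=64):
         super().__init__()
         self.global_branch = GlobalBranch(channels)
         self.local_branch = LocalBranch(channels)
         self.fuse = nn.Conv2d(2 * channels, channels, 1)

     def forward(self, z, scratch_mask):
         # resize the image-resolution scratch mask to the latent resolution
         mask = F.interpolate(scratch_mask, size=z.shape[-2:], mode="nearest")
         zg = self.global_branch(z, mask)
         zl = self.local_branch(z)
         return self.fuse(torch.cat([zg, zl], dim=1))

 # hypothetical usage: 64-channel latent features and an image-resolution scratch mask
 z = torch.randn(1, 64, 64, 64)
 mask = torch.zeros(1, 1, 256, 256)
 restored_latent = LatentRestorer(channels=64)(z, mask)
 print(restored_latent.shape)  # torch.Size([1, 64, 64, 64])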

Framework 

  • The framework is built around two variational autoencoders (VAEs):
    • VAE1 is trained on real old photos r ∈ R and synthetic images x ∈ X,
    • while VAE2 is trained on clean images y ∈ Y.
  • The VAEs transform images into compact latent spaces. A mapping network with a partial non-local block then translates the latent code of a corrupted (blurry, noisy, damaged) image into that of a clean one; a rough sketch of this encode-translate-decode flow follows below.
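At inference time the whole pipeline is encode, translate, decode. The snippet below is a rough sketch of that flow under the same assumptions as above (encoder_r, mapping, and decoder_y are placeholders standing in for VAE1’s encoder, the latent mapping such as the LatentRestorer sketched earlier, and VAE2’s decoder):

 # Rough sketch of the inference flow; all argument names are placeholders.
 import torch

 @torch.no_grad()
 def restore(old_photo, scratch_mask, encoder_r, mapping, decoder_y):
     # old_photo: 1x3xHxW tensor; scratch_mask: 1x1xHxW float tensor (1 = scratch)
     z_r = encoder_r(old_photo)        # VAE1 encodes the degraded photo into latent space
     z_y = mapping(z_r, scratch_mask)  # learned translation from degraded to clean latent space
     return decoder_y(z_y)             # VAE2's decoder reconstructs the clean image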

Implementation

Let’s walk through the official demo and relive the 90s by making old vintage photos look new again. Before that, we need to set up the environment and install the dependencies.

Jump straight to the code: here

Cloning the GitHub Repository

First, we are going to clone the official project repo from GitHub. Follow the steps below and make sure your Google Colab runtime type is set to GPU.

!git clone https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life.git photo_restoration

Setting up the Environment and Installing Dependencies

Let’s fill our project directory with the pretrained models and checkpoints so we can jump straight to the testing part, because training and reproducing the results from scratch can take days. So let’s download the models and install the dependencies.

 # pull the syncBN repo
 %cd photo_restoration/Face_Enhancement/models/networks
 !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
 !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
 %cd ../../../
 %cd Global/detection_models
 !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
 !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
 %cd ../../
 # download the landmark detection model
 %cd Face_Detection/
 !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
 !bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
 %cd ../
 # download the pretrained model
 %cd Face_Enhancement/
 !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
 !unzip checkpoints.zip
 %cd ../
 %cd Global/
 !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
 !unzip checkpoints.zip
 %cd ../
 # install dependencies
 !pip install -r requirements.txt

Testing in normal mode

 import io
 import os
 import IPython.display
 import numpy as np
 import PIL.Image
 %cd /content/photo_restoration/
 input_folder = "test_images/old"
 output_folder = "output"
 basepath = os.getcwd()
 input_path = os.path.join(basepath, input_folder)
 output_path = os.path.join(basepath, output_folder)
 os.makedirs(output_path, exist_ok=True)  # avoid failing if the output folder already exists
 !python run.py --input_folder /content/photo_restoration/test_images/old --output_folder /content/photo_restoration/output/ --GPU 0
 # helper: display a NumPy array inline, falling back to JPEG if PNG encoding fails
 def imshow(a, format='png', jpeg_fallback=True):
     a = np.asarray(a, dtype=np.uint8)
     data = io.BytesIO()
     PIL.Image.fromarray(a).save(data, format)
     im_data = data.getvalue()
     try:
       disp = IPython.display.display(IPython.display.Image(im_data))
     except IOError:
       if jpeg_fallback and format != 'jpeg':
         print(('Warning: image was too large to display in format "{}"; '
               'trying jpeg instead.').format(format))
         return imshow(a, format='jpeg')
       else:
         raise
     return disp
 # helper: place the original and restored images side by side in a single image
 def make_grid(I1, I2, resize=True):
     I1 = np.asarray(I1)
     H, W = I1.shape[0], I1.shape[1]
     if I1.ndim >= 3:
         I2 = np.asarray(I2.resize((W,H)))
         I_combine = np.zeros((H,W*2,3))
         I_combine[:,:W,:] = I1[:,:,:3]
         I_combine[:,W:,:] = I2[:,:,:3]
     else:
         I2 = np.asarray(I2.resize((W,H)).convert('L'))
         I_combine = np.zeros((H,W*2))
         I_combine[:,:W] = I1[:,:]
         I_combine[:,W:] = I2[:,:]
     I_combine = PIL.Image.fromarray(np.uint8(I_combine))
     W_base = 600
     if resize:
       ratio = W_base / (W*2)
       H_new = int(H * ratio)
       I_combine = I_combine.resize((W_base, H_new), PIL.Image.LANCZOS)
     return I_combine
 # display image in before after table format
 filenames = os.listdir(os.path.join(input_path))
 filenames.sort()
 for filename in filenames:
     print(filename)
     image_original = PIL.Image.open(os.path.join(input_path, filename))
     image_restore = PIL.Image.open(os.path.join(output_path, 'final_output', filename))
     display(make_grid(image_original, image_restore)) 

Restoring old scratchy and grainy photos

The --with_scratch flag runs the additional scratch detection branch so that structured damage is also repaired:

 !rm -rf /content/photo_restoration/output/*
 !python run.py --input_folder /content/photo_restoration/test_images/old_w_scratch/ --output_folder /content/photo_restoration/output/ --GPU 0 --with_scratch
 input_folder = "test_images/old_w_scratch"
 output_folder = "output"
 input_path = os.path.join(basepath, input_folder)
 output_path = os.path.join(basepath, output_folder)
 filenames = os.listdir(os.path.join(input_path))
 filenames.sort()
 for filename in filenames:
     print(filename)
     image_original = PIL.Image.open(os.path.join(input_path, filename))
     image_restore = PIL.Image.open(os.path.join(output_path, 'final_output', filename))
     display(make_grid(image_original, image_restore)) 

Restore your own custom image

Let’s take an old photo of our own, an old portrait for instance, and try to restore it. Upload it when prompted by the cell below.

 from google.colab import files
 import shutil
 upload_path = os.path.join(basepath, "test_images", "upload")
 upload_output_path = os.path.join(basepath, "upload_output")
 if os.path.isdir(upload_output_path):
     shutil.rmtree(upload_output_path)
 if os.path.isdir(upload_path):
     shutil.rmtree(upload_path)
 os.mkdir(upload_output_path)
 os.mkdir(upload_path)
 uploaded = files.upload()
 for filename in uploaded.keys():
     shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, filename))
 !python run.py --input_folder /content/photo_restoration/test_images/upload --output_folder /content/photo_restoration/upload_output --GPU 0 

Output

 filenames_upload = os.listdir(os.path.join(upload_path))
 filenames_upload.sort()
 filenames_upload_output = os.listdir(os.path.join(upload_output_path, "final_output"))
 filenames_upload_output.sort()
 for filename, filename_output in zip(filenames_upload, filenames_upload_output):
     image_original = PIL.Image.open(os.path.join(upload_path, filename))
     image_restore = PIL.Image.open(os.path.join(upload_output_path, "final_output", filename_output))
     display(make_grid(image_original, image_restore))
     print("") 

What else?

If you have just restored old black and white photos, they probably already look pretty good. Still, you can go further and colorize them to make them look more realistic, and we have already covered a great library for that: DeOldify. Bringing Old Photos Back to Life focuses on removing grain, scratches, patches, and that faded vintage color cast, so for colorization you can feed its outputs through DeOldify, as sketched below.
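As a rough illustration of chaining the two tools, the cell below follows DeOldify’s own image-colorization Colab; the function names (get_image_colorizer, plot_transformed_image), the setup steps, and the render_factor default come from that repository and may change between versions, so treat this as a sketch and check DeOldify’s README.

 # Sketch: colorize the restored outputs with DeOldify (verify against DeOldify's README).
 # !git clone https://github.com/jantic/DeOldify.git DeOldify
 # %cd DeOldify
 # !pip install -r requirements.txt
 # (DeOldify's pretrained weights also need to be downloaded into its models/ folder.)
 import os

 from deoldify import device
 from deoldify.device_id import DeviceId
 device.set(device=DeviceId.GPU0)
 from deoldify.visualize import get_image_colorizer

 colorizer = get_image_colorizer(artistic=True)

 restored_dir = "/content/photo_restoration/output/final_output"  # outputs from run.py
 for name in sorted(os.listdir(restored_dir)):
     # render_factor trades color saturation against stability; ~35 is a common default
     colorizer.plot_transformed_image(
         path=os.path.join(restored_dir, name), render_factor=35, compare=True)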

Read More:

Here are some of the resources related to the above demonstration:

  • Official GitHub repository: https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life
  • Paper: “Bringing Old Photos Back to Life” (CVPR 2020) by Ziyu Wan et al.

Mohit Maithani
Mohit is a data and technology enthusiast with good exposure to solving real-world problems across various avenues of IT and the deep learning domain. He believes in solving humans’ daily problems with the help of technology.
