
Guide To Zooming Slow-Mo: One-Stage Space-Time Video Super-Resolution

Zooming Slow-Mo is a one-stage framework for space-time video super-resolution that directly synthesizes high frame rate, high-res videos without generating the intermediate low-res frames.



Space-time video super-resolution (STVSR) is a computer vision task that aims to increase video resolution in both time and space, creating a high-resolution slow-motion video from a low frame rate, low-resolution input. It can be divided into two sub-tasks: video frame interpolation (VFI), which generates the intermediate video frames, and video super-resolution (VSR). Existing state-of-the-art two-stage methods use large frame-synthesis modules to predict high-resolution frames; this leads to high computational complexity and can be time-consuming. Furthermore, the frame interpolation and spatial super-resolution sub-tasks are intra-related and carry coupled information that could benefit both tasks, which two-stage methods fail to exploit.

To overcome these problems, Xiaoyu Xiang, Yapeng Tian, and their co-authors proposed a new one-stage approach for space-time video super-resolution: Zooming Slow-Mo. Zooming Slow-Mo directly synthesizes a high-resolution slow-motion video from a low frame rate, low-resolution video. It temporally interpolates the features of missing low-resolution video frames, utilizing local temporal contexts via a feature temporal interpolation network. A deep reconstruction network then generates the high-resolution slow-motion video frames.

Architecture & Approach

Zooming Slow-Mo Architecture

The Zooming Slow-Mo framework consists of four main parts: a feature extractor, a frame feature temporal interpolation module, a deformable ConvLSTM, and a deep frame reconstruction network.

The feature extraction module consists of one convolution layer followed by five residual blocks. It extracts feature maps {F^L_{2t−1}}_{t=1}^{n+1} from the input video frames. The proposed frame feature interpolation network then uses these feature maps to generate the low-resolution feature maps of the non-existent intermediate frames.
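To make this concrete, here is a minimal PyTorch sketch of such a feature extractor. The 64-channel width and the internal layout of each residual block (conv → ReLU → conv with an identity skip) are assumptions based on common VSR practice, not the authors' exact implementation:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # conv -> ReLU -> conv with an identity skip connection (assumed layout)
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.conv2(self.relu(self.conv1(x)))

class FeatureExtractor(nn.Module):
    # one convolution layer followed by five residual blocks
    def __init__(self, in_channels=3, channels=64):
        super().__init__()
        self.head = nn.Conv2d(in_channels, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(5)])

    def forward(self, frame):
        # frame: (N, 3, H, W) low-res input -> (N, 64, H, W) feature map
        return self.body(self.head(frame))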

Frame Feature Temporal Interpolation

Frame Feature Temporal Interpolation Network

Given two feature maps F^L_1 and F^L_3 from the low-resolution input video frames I^L_1 and I^L_3, the aim is to synthesize the feature map F^L_2 of the missing intermediate frame I^L_2. Existing frame interpolation networks perform temporal interpolation on pixel-wise video frames, which leads to a two-stage STVSR design. In contrast, Zooming Slow-Mo learns a feature temporal interpolation function f(·) to directly synthesize the intermediate feature maps. This interpolation function can be formulated as:

F^L_2 = H(T_1(F^L_1, Φ_1), T_3(F^L_3, Φ_3))

Here T_1(·) and T_3(·) are two sampling functions, Φ_1 and Φ_3 are the corresponding sampling parameters, and H(·) is a blending function for aggregating the sampled features. To generate accurate intermediate feature maps, T_1(·) needs to capture the forward motion information between F^L_1 and F^L_2, and T_3(·) needs to capture the backward motion information between F^L_3 and F^L_2. However, F^L_2 does not exist. To work around this issue, the information flow between F^L_1 and F^L_3 is used to approximate the forward and backward motion. A linear blending function is then used to combine the two sampled feature maps:

F^L_2 = α ∗ T_1(F^L_1, Φ_1) + β ∗ T_3(F^L_3, Φ_3)

Here α and β are learnable 1 × 1 convolution kernels, and ∗ denotes the convolution operator.
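The structure of this blend can be sketched in PyTorch as below. Note that the deformable sampling inside T_1(·) and T_3(·) is replaced here by plain 3 × 3 convolutions purely for brevity, so the sketch captures only the linear blending of the two sampled branches by the learnable 1 × 1 kernels α and β:

import torch
import torch.nn as nn

class FeatureTemporalInterpolation(nn.Module):
    # sketch of F^L_2 = alpha * T1(F^L_1) + beta * T3(F^L_3)
    def __init__(self, channels=64):
        super().__init__()
        # stand-ins for the deformable sampling functions T1(.) and T3(.)
        self.t1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.t3 = nn.Conv2d(channels, channels, 3, padding=1)
        # alpha and beta: learnable 1x1 convolution kernels of the blend
        self.alpha = nn.Conv2d(channels, channels, 1, bias=False)
        self.beta = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, f1, f3):
        # f1, f3: (N, C, H, W) feature maps of the two existing frames
        return self.alpha(self.t1(f1)) + self.beta(self.t3(f3))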

Deformable ConvLSTM

Temporal information is essential in video restoration tasks. Therefore, instead of reconstructing high-resolution frames from individual feature maps, Zooming Slow-Mo aggregates temporal contexts from neighbouring frames. It employs ConvLSTM, a popular 2D sequence-data modelling method, to perform the temporal aggregation. However, ConvLSTM can only capture motion between the previous states and the current input feature map within small convolution receptive fields, which greatly limits ConvLSTM's ability to handle large motions.

Deformable ConvLSTM used in Zooming Slow-Mo

When working with videos containing large motions, this leads to a severe temporal mismatch between the previous states and the current feature map F^L_t, and the reconstructed high-resolution frame I^H_t suffers from artifacts as a result. To overcome this problem and make better use of global temporal contexts, a state-updating cell with deformable alignment is embedded into ConvLSTM:

Δp^h_t = g^h([h_{t−1}, F^L_t]),  Δp^c_t = g^c([c_{t−1}, F^L_t])
h^a_{t−1} = DConv(h_{t−1}, Δp^h_t),  c^a_{t−1} = DConv(c_{t−1}, Δp^c_t)
h_t, c_t = ConvLSTM(h^a_{t−1}, c^a_{t−1}, F^L_t)

Here g^h and g^c denote general functions of several convolution layers, Δp^h_t and Δp^c_t are the learned offsets, and h^a_{t−1} and c^a_{t−1} refer to the hidden and cell states aligned with the current feature map F^L_t, respectively. In contrast to vanilla ConvLSTM, deformable ConvLSTM enforces the hidden and cell states to align with the current feature map F^L_t. In addition, the deformable ConvLSTM is applied in a bidirectional manner to maximize the utilization of temporal information.
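The deformable alignment of a single state can be sketched with torchvision's deformable convolution, as below. This illustrates the update rule rather than the repository's DCNv2-based implementation, and the single-layer offset predictor is a simplification of the multi-layer functions g^h and g^c:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class StateAlignment(nn.Module):
    # aligns a previous hidden or cell state with the current feature map F^L_t
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        # offset predictor g(.): 2 offsets (x, y) per kernel sample, from [state, F^L_t]
        self.offset_conv = nn.Conv2d(2 * channels, 2 * kernel_size * kernel_size,
                                     3, padding=1)
        self.dconv = DeformConv2d(channels, channels, kernel_size, padding=1)

    def forward(self, state, feat):
        # state: h_{t-1} or c_{t-1}; feat: current feature map F^L_t
        offsets = self.offset_conv(torch.cat([state, feat], dim=1))
        return self.dconv(state, offsets)  # aligned state h^a_{t-1} or c^a_{t-1}

Per time step, both states would be aligned and then passed to a standard ConvLSTM cell (convlstm_cell here is a hypothetical helper): h_a = align_h(h_prev, feat); c_a = align_c(c_prev, feat); h, c = convlstm_cell(feat, (h_a, c_a)).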

Frame Reconstruction

A temporally shared synthesis network is used for frame reconstruction; it synthesizes high-resolution frames from the individual hidden states h_t. The reconstruction network has 40 stacked residual blocks for learning deep features and uses PixelShuffle for sub-pixel upscaling to reconstruct high-resolution frames. A reconstruction loss (a Charbonnier penalty) is used to optimize this network:

L_rec = sqrt(‖I^H_t − I^GT_t‖^2 + ε^2)

Here I^GT_t denotes the t-th ground-truth high-resolution video frame and ε is set to 1 × 10^−3.
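In PyTorch, the Charbonnier-style loss and the sub-pixel upscaling might look as follows. The per-pixel mean reduction and the two-stage ×4 PixelShuffle layout are assumptions chosen for clarity, not the exact configuration used in the paper:

import torch
import torch.nn as nn

def charbonnier_loss(pred, target, eps=1e-3):
    # differentiable L1 variant: sqrt((I^H - I^GT)^2 + eps^2), with eps = 1e-3
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

class Upsampler(nn.Module):
    # x4 sub-pixel upscaling via two PixelShuffle(2) stages
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # (N, 4C, H, W) -> (N, C, 2H, 2W)
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),
            nn.Conv2d(channels, 3, 3, padding=1),  # project back to RGB
        )

    def forward(self, hidden):
        # hidden: per-frame hidden state h_t from the deformable ConvLSTM
        return self.net(hidden)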

Space-Time Video Super-Resolution using Zooming Slow-Mo

  1. Clone the Zooming Slow-Mo GitHub repository.
git clone --recursive https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020.git
  2. Install OpenCV, PyTorch, and the other requirements.
pip install -r requirements.txt
  3. Compile the deformable convolutional network V2 (DCNv2).
 cd $ZOOMING_ROOT/codes/models/modules/DCNv2
 bash make.sh
  4. Perform space-time video super-resolution using the video_to_zsm.py script, writing the result to a separate output file.
python codes/video_to_zsm.py --model experiments/pretrained_models/xiang2020zooming.pth --video low-res-vid.mp4 --output slow-mo-vid.mp4 --N_out 3
A higher resolution slow-motion video synthesized using Zooming Slow-Mo


Last Epoch

Zooming Slow-Mo versus existing state-of-the-art two-stage approaches

This article introduced Zooming Slow-Mo, a one-stage framework for space-time video super-resolution that directly synthesizes high frame rate, high-resolution videos without generating the intermediate low-resolution frames. It introduces a deformable feature interpolation network that enables feature-level temporal interpolation, and it uses a modified deformable ConvLSTM to aggregate temporal information and handle large motions. Thanks to its one-stage design, Zooming Slow-Mo is able to exploit the intra-relatedness between temporal interpolation and spatial super-resolution, and it outperforms existing state-of-the-art two-stage approaches in both effectiveness and efficiency.

For a more in-depth understanding of Zooming Slow-Mo, refer to the following resources:

Paper: Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution (https://arxiv.org/abs/2002.11616)
Code: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020
