Regardless of what is happening around the world, the AI community is one productive bunch, with something interesting to share almost every day. Like every week, this week too has brought interesting releases. From weather forecasting to reinforcement learning to chip design, here is what is new this week:
MetNet: Google’s Neural Weather Model
Predicting the weather is one of the most challenging tasks for any time series model. The sheer number of variables involved in a weather forecast makes it hard for models to predict accurately. To address some of these challenges, the researchers at Google AI present "MetNet: A Neural Weather Model for Precipitation Forecasting".
MetNet is a deep neural network capable of predicting future precipitation at 1 km resolution over 2-minute intervals, at timescales of up to 8 hours into the future. The researchers claim the model outperforms the current state-of-the-art physics-based model in use by NOAA for prediction times up to 7-8 hours ahead, and generates a forecast for the entire US in a matter of seconds, as opposed to an hour.
Google’s AI Now Learns Chip Design
Azalia Mirhoseini and senior software engineer Anna Goldie of Google Brain have come up with a neural network that learns to do a particularly time-consuming part of chip design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks' worth of design effort by human experts in terms of power, performance, and area.
Massively Scaling RL With SEED RL
Google introduced "SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference". The researchers present an RL agent that scales to thousands of machines, enabling training at millions of frames per second while significantly improving computational efficiency.
In this approach, neural network inference is performed centrally by the learner on specialized hardware (GPUs or TPUs), enabling accelerated inference and avoiding the data transfer bottleneck by keeping the model parameters and states local to the learner.
This makes it possible to achieve up to a million queries per second on a single machine. The learner can be scaled to thousands of cores (up to 2048 on Cloud TPUs) and can be scaled to thousands of machines, making it possible to train at millions of frames per second.
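The centralized-inference idea described above can be illustrated with a toy sketch. This is a hypothetical, simplified illustration, not the actual SEED RL API: actors send only their observations to a single learner, which runs one batched forward pass (a stand-in `policy` function here) and sends actions back, so model parameters never leave the learner.

```python
import queue
import threading
import numpy as np

NUM_ACTORS = 4
OBS_DIM = 8

def policy(obs_batch):
    # Stand-in for a neural-network forward pass on a GPU/TPU.
    return np.argmax(obs_batch, axis=1)

obs_q = queue.Queue()                                # actors -> learner
act_qs = [queue.Queue() for _ in range(NUM_ACTORS)]  # learner -> each actor
results = {}

def actor(actor_id):
    obs = np.random.rand(OBS_DIM)
    obs_q.put((actor_id, obs))                  # ship the observation only
    results[actor_id] = act_qs[actor_id].get()  # block until an action arrives

def learner_step():
    # Gather one observation per actor, then run a single batched inference.
    ids, obs = zip(*(obs_q.get() for _ in range(NUM_ACTORS)))
    for i, action in zip(ids, policy(np.stack(obs))):
        act_qs[i].put(int(action))

threads = [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
for t in threads:
    t.start()
learner_step()
for t in threads:
    t.join()
```

Because inference is batched on the learner, the accelerator stays busy and the actors only exchange small observation/action messages, which is the bottleneck SEED RL is designed to remove.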
Facebook AI Synthesizes New Views From A Single Image
A team from Facebook AI Research has proposed a novel end-to-end model for view synthesis that is trained on real images without any ground-truth 3D information. They have introduced a novel differentiable point cloud renderer that transforms a latent 3D point cloud of features into the target view.
The projected features are decoded by a refinement network to inpaint missing regions and generate a realistic output image. The 3D component inside this generative model allows for interpretable manipulation of the latent feature space at test time; for example, one can animate trajectories from a single image.
They have released code that synthesizes new views of a scene given a single image of an unseen scene at test time. The model is trained end to end on pairs of views in a self-supervised fashion, using GAN techniques and the new differentiable point cloud renderer. At test time, a single image of an unseen scene is fed to the model, and new views are generated from it.
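At the heart of any point cloud renderer is a projection step that maps 3D points into the pixel grid of the target view. The sketch below shows only that basic pinhole projection with numpy; it is an illustration of the geometry, not the paper's differentiable renderer, which soft-splats features rather than projecting hard points, and the intrinsics values are made up for the example.

```python
import numpy as np

def project_points(points, K):
    """Project Nx3 camera-space points to pixel coordinates with intrinsics K."""
    uvw = points @ K.T              # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

# Hypothetical camera intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

pts = np.array([[0.0, 0.0, 2.0],     # a point on the optical axis
                [0.5, -0.3, 4.0]])   # an off-axis point
pix = project_points(pts, K)
print(pix[0])  # the on-axis point lands at the principal point: [320. 240.]
```

Making this step differentiable (so gradients flow from the rendered image back to the 3D features) is exactly what lets the Facebook model be trained end to end without ground-truth 3D.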
Quantization Now Available On PyTorch
Quantization refers to techniques for performing both computations and memory accesses with lower-precision data, usually 8-bit integers (int8) instead of 32-bit floating point. This enables performance gains in several vital areas:
- 4 times reduction in model size;
- 2-4 times reduction in memory bandwidth;
- 2-4 times faster inference.
Quantization is available in PyTorch starting with version 1.3, and with the release of PyTorch 1.4, quantized models are published for ResNet, ResNeXt, MobileNetV2, GoogLeNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.
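As a quick taste of the workflow, the snippet below applies PyTorch's dynamic quantization, the simplest of the available modes, to a small model. The model itself is a made-up example; `torch.quantization.quantize_dynamic` converts the `nn.Linear` layers to store int8 weights while quantizing activations on the fly.

```python
import torch
import torch.nn as nn

# A small float32 model to quantize (hypothetical example).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic quantization: Linear weights stored as int8,
# activations quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
out = quantized(x)
print(out.shape)  # torch.Size([1, 10])
```

Since int8 weights take a quarter of the space of float32, this is where the roughly 4x model-size reduction listed above comes from.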
This blog post provides more details on how to use it.