
Scalable RL, Neural Weather Model And More: Top AI Releases Of The Week


Regardless of what is happening around the world, the AI community remains a productive bunch, with something interesting to share almost every day. This week is no different: teams at Google, Facebook, and PyTorch have all come up with interesting releases. From weather forecasting to chip design to reinforcement learning, here is what is new this week:

MetNet: Google’s Neural Weather Model

via Google AI blog

Predicting the weather is one of the most challenging tasks for any time-series model: the sheer number of interacting variables in a forecast makes accurate prediction difficult. To address some of these challenges, the researchers at Google AI have presented “MetNet: A Neural Weather Model for Precipitation Forecasting”.

MetNet is a deep neural network that predicts future precipitation at 1 km spatial resolution, in 2-minute intervals, at timescales of up to 8 hours into the future. The researchers claim the model outperforms the physics-based model currently used by NOAA at prediction times of up to 7-8 hours ahead, and generates a forecast for the entire US in a matter of seconds as opposed to an hour.

Google’s AI Now Learns Chip Design

via Google TPU

Azalia Mirhoseini and senior software engineer Anna Goldie of Google Brain have come up with a neural network that learns to do a particularly time-consuming part of chip design called placement. After training on enough chip designs, it can produce a placement for a Google Tensor Processing Unit in less than 24 hours that beats several weeks' worth of design effort by human experts in terms of power, performance, and area.

Massively Scaling RL With SEED RL

via Google AI blog

Google has introduced “SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference”. The researchers present an RL agent that scales to thousands of machines, enabling training at millions of frames per second while significantly improving computational efficiency.

In this approach, neural network inference is made centrally by the learner on specialized hardware (GPUs or TPUs), enabling accelerated inference and avoiding the data transfer bottleneck by ensuring that the model parameters and states are kept local. 

This makes it possible to serve up to a million inference queries per second on a single machine. The learner can be scaled to thousands of cores (up to 2048 on Cloud TPUs), and the actors can be scaled to thousands of machines, making it possible to train at millions of frames per second.
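The division of labour above can be sketched in a few lines. The following toy (not the SEED RL codebase; `policy` and the queue layout are made up for illustration) shows the core idea: actors never run the network themselves, they only ship observations to a central learner, which batches them, runs inference once per batch, and routes each action back.

```python
import queue
import threading

BATCH = 4

def policy(observations):
    # Stand-in for a neural network running on a central accelerator:
    # here the "action" is just the sign of the observation.
    return [1 if o >= 0 else -1 for o in observations]

def learner(requests, results):
    """Central inference loop: drain up to BATCH requests, infer once per batch."""
    while True:
        batch = [requests.get()]                     # block for the first request
        while len(batch) < BATCH and not requests.empty():
            batch.append(requests.get_nowait())      # opportunistically batch more
        actor_ids, obs = zip(*batch)
        for actor_id, action in zip(actor_ids, policy(list(obs))):
            results[actor_id].put(action)            # route each action back

def actor(actor_id, observation, requests, results, out):
    # Actors only step the environment; they never touch model parameters.
    requests.put((actor_id, observation))
    out[actor_id] = results[actor_id].get()

requests = queue.Queue()
results = {i: queue.Queue() for i in range(4)}
actions = {}

threading.Thread(target=learner, args=(requests, results), daemon=True).start()
threads = [threading.Thread(target=actor, args=(i, o, requests, results, actions))
           for i, o in enumerate([0.5, -2.0, 3.0, -0.1])]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(actions)
```

Because the model parameters live only on the learner, nothing but small observation and action messages crosses the actor-learner boundary, which is what removes the data transfer bottleneck described above.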

Facebook’s SynSin

via Facebook Research 

A team from Facebook AI Research has proposed SynSin, a novel end-to-end model for synthesizing new views of a scene from a single image, trained on real images without any ground-truth 3D information. They introduce a novel differentiable point cloud renderer that transforms a latent 3D point cloud of features into the target view.

The projected features are decoded by a refinement network that inpaints missing regions and generates a realistic output image. The 3D component inside this generative model allows for interpretable manipulation of the latent feature space at test time; for example, one can animate trajectories from a single image.

They have released the code, which synthesizes new views of an unseen scene given a single image at test time. The model is trained end to end on pairs of views in a self-supervised fashion, using GAN techniques and the new differentiable point cloud renderer.
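At the heart of any point cloud renderer is a projection step: 3D points are re-expressed in the target camera's frame and projected onto its image plane. The sketch below illustrates only that geometric step with a simple pinhole model (it is not Facebook's differentiable implementation; the intrinsics `f`, `cx`, `cy` and the camera shift are made-up values).

```python
# Hypothetical pinhole intrinsics: focal length and principal point, in pixels.
f, cx, cy = 100.0, 64.0, 64.0

def project(point, cam_x=0.0):
    """Project a 3D point into a camera translated by cam_x along the x-axis."""
    x, y, z = point
    x -= cam_x                    # express the point in the target camera frame
    u = f * x / z + cx            # perspective divide, then shift to pixel coords
    v = f * y / z + cy
    return (u, v)

cloud = [(0.0, 0.0, 2.0), (1.0, 0.5, 4.0)]          # latent 3D points (metres)
source_view = [project(p) for p in cloud]            # original viewpoint
target_view = [project(p, cam_x=0.5) for p in cloud] # camera moved 0.5 to the right
print(source_view)
print(target_view)
```

In SynSin the projected quantities are learned features rather than colours, and the renderer is differentiable so that gradients flow back through this projection into the rest of the network; the refinement network then fills in the regions no source pixel projects to.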

Quantization Now Available On PyTorch

Quantization refers to techniques for performing both computations and memory accesses with lower-precision data, usually int8 rather than floating point. This enables performance gains in several vital areas:

  • 4 times reduction in model size;
  • 2-4 times reduction in memory bandwidth;
  • 2-4 times faster inference.
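The arithmetic behind these numbers is a simple affine mapping: each float is stored as an 8-bit integer via a scale and a zero point derived from the tensor's value range. The toy below (a sketch of the general int8 scheme, not PyTorch's internal kernels) shows where the 4x size reduction and the small accuracy cost come from.

```python
def choose_qparams(values, qmin=-128, qmax=127):
    """Pick a scale and zero point covering the value range (and zero)."""
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    # Each result fits in one int8 byte instead of four float32 bytes.
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    return [(q - zero_point) * scale for q in qvalues]

vals = [-1.0, 0.0, 1.0, 2.0]
scale, zp = choose_qparams(vals)
q = quantize(vals, scale, zp)
back = dequantize(q, scale, zp)
print(q)      # int8 codes
print(back)   # round-trip values, each within one quantization step of the input
```

The round-trip error is bounded by the scale (the width of one quantization step), which is why int8 inference usually costs little accuracy while quartering model size.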

Quantization is available in PyTorch starting in version 1.3 and with the release of PyTorch 1.4 quantized models are published for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.
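The lowest-friction entry point is dynamic quantization, available since PyTorch 1.3: weights of the listed module types are converted to int8 ahead of time, and activations are quantized on the fly at inference. A minimal sketch (the toy `Sequential` model here is made up for illustration):

```python
import torch
import torch.nn as nn

# A small float model standing in for something real.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Convert the Linear layers' weights to int8; other modules are untouched.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)
with torch.no_grad():
    out = quantized(x)          # same interface as the float model
print(out.shape)
```

Static quantization and quantization-aware training require more setup (observers and calibration data) but quantize activations ahead of time as well, which is where the larger speedups come from.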

The official PyTorch blog post on quantization provides more details on how to use it.


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.