What Is TensorFlow Research Cloud?

Google has not only helped the data science community with open-source libraries like TensorFlow, but has also been foundational to research and development initiatives outside its own organization. To further expedite innovation in the data science field, Google provides the TensorFlow Research Cloud, which caters to the intensive compute needs of creative projects.

According to Jerome Pesenti, Head of AI at Facebook, the cost of research is increasing ten-fold every year and has now reached seven figures. Like any other research, accomplishing a breakthrough in machine learning is a challenging task. In fact, innovation in data science is even more difficult because of the substantial computational resources it requires. However, researchers can leverage the free resources that Google provides with its TensorFlow Research Cloud to continue innovating in the domain while keeping costs in check.

TensorFlow Research Cloud

Machine learning models require a colossal amount of data to make accurate decisions, and therefore exceptional computational power. For this, Google released the TensorFlow Research Cloud (TFRC), a cluster of 1,000 Cloud TPUs that can be accessed by any enthusiast who is willing to make new advancements in the data science domain. As per Google, these cloud-based processing capabilities will support a wide range of computationally intensive projects that might not be possible otherwise.

Here are some of the benefits researchers will get:

  1. Access to Cloud TPUs to accelerate the training and evaluation of machine learning models
  2. Up to 180 teraflops of floating-point performance per Cloud TPU
  3. 64 GB of ultra-high-bandwidth memory per Cloud TPU
  4. Familiar TensorFlow programming interfaces (see the sketch below)
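To give a sense of what those familiar TensorFlow interfaces look like in practice, here is a minimal sketch of how a program typically connects to a Cloud TPU. It uses the current TensorFlow 2.x APIs (the TF 1.x APIs available when TFRC launched were different), and the TPU name "my-tpu" is only a placeholder for whatever resource a project is allocated:

    import tensorflow as tf

    # Locate the TPU and initialize it; "my-tpu" is a placeholder name.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Anything built inside the strategy scope is replicated across the TPU cores.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # model.fit(train_dataset) would then run the training loop on the TPU cores.

A standard Keras training loop then runs on the TPU without further changes, which is what "familiar TensorFlow programming interfaces" amounts to in practice.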

Cloud TPU

Google’s Cloud TPU is a custom-designed machine learning ASIC that powers Google products like Translate, Gmail, Assistant, Search, and more. The TPU offers 100 petaflops of performance in a single pod, which is enough compute for most research breakthroughs. Because TPUs are domain-specific and built on a systolic array architecture, they handle neural network workloads better than CPUs and GPUs. Google designed TPUs specifically to perform the massive multiplications and additions that neural networks require while consuming less power.
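For intuition only, the operation a systolic array pipelines is the multiply-accumulate step at the heart of a matrix product. The naive Python loop below shows the arithmetic being accelerated; it says nothing about how the hardware is actually wired:

    # Illustrative only: the multiply-accumulate (MAC) pattern behind a matrix product.
    # A TPU streams weights and activations through a 2-D grid of MAC units instead of
    # looping like this, but the arithmetic it performs is the same.
    def matmul_mac(a, b):
        rows, inner, cols = len(a), len(b), len(b[0])
        out = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                acc = 0.0
                for k in range(inner):
                    acc += a[i][k] * b[k][j]  # one multiply-accumulate step
                out[i][j] = acc
        return out

    print(matmul_mac([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]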

However, access to the resources requires approval from Google. A researcher has to sign up by providing details of the project, which helps the firm evaluate the creativity and potential of the initiative. Google then chooses a few projects and allocates compute to them accordingly.

The program is not limited to academia: anyone with an interest in research can apply. Google is trying to encourage a wide range of experts, including those from non-traditional backgrounds, to participate. Besides, one can apply multiple times with different projects.

The idea behind this program is to benefit the open machine learning research community as a whole; in return, applicants are expected to release their source code and publish their work for peer review, among other commitments.

Google has also released the Cloud TPU Alpha program, which is limited to businesses interested in proprietary research and development. It will help companies train models in days, or even hours, instead of weeks. Cloud TPUs can be used to process industrial-scale datasets such as images, videos, and audio, while serving live requests in production using large and complex ML models.

Outlook

The program was announced at Google I/O in 2017 to drive innovation in TensorFlow-based AI models, and it has continued to support research ever since as the demand for computation keeps increasing. OpenAI recently released an analysis of how some of the best AI-based systems, like AlphaGo Zero, were able to outperform humans in strategy games because they were trained on a massive amount of data with more than 1,500 petaflop/s-days of compute (a rough sense of that scale is sketched below). The next closest AI agent was AlphaZero, which required less than 500 petaflop/s-days. Few researchers or AI enthusiasts can afford anything close to what these systems consumed. Thus, Google’s TensorFlow Research Cloud program is one of the best bets for researchers who want to leave a mark in the data science landscape.
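As a rough illustration of that scale (using only the approximate figures quoted above), a petaflop/s-day is the work done by a machine sustaining 10^15 floating-point operations per second for a full day. The back-of-the-envelope sketch below assumes perfect utilization, which real training runs never achieve:

    # Back-of-the-envelope only: how long a run quoted in petaflop/s-days would take
    # at a given sustained throughput. 1,500 pf/s-days is the approximate AlphaGo Zero
    # figure cited above; 100 petaflop/s is the peak Google quotes for a Cloud TPU pod.
    SECONDS_PER_DAY = 86_400

    def days_to_train(compute_pfs_days, sustained_pfs):
        """Days needed to do `compute_pfs_days` of work at `sustained_pfs` petaflop/s."""
        return compute_pfs_days / sustained_pfs

    total_flops = 1_500 * 1e15 * SECONDS_PER_DAY               # ~1.3e23 operations
    print(f"Total operations: {total_flops:.2e}")
    print(f"Days on a 100 petaflop/s pod:  {days_to_train(1_500, 100):.1f}")   # 15.0
    print(f"Days on one 180-teraflop TPU:  {days_to_train(1_500, 0.18):.0f}")  # ~8333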

Rohit Yadav
Rohit is a technology journalist and technophile who likes to communicate the latest trends around cutting-edge technologies in a way that is straightforward to assimilate. In a nutshell, he is deciphering technology. Email: rohit.yadav@analyticsindiamag.com
