The MLPerf v1.1 Training round recently concluded, and Google submitted two large language model benchmarks to the Open division: one with 480 billion parameters and a second with 200 billion parameters. Both submissions use publicly available infrastructure, including Cloud TPU v4 Pod slices and the open-source Lingvo modeling framework.
Traditionally, training models at these scales would require building a supercomputer at a cost of tens or even hundreds of millions of dollars, something only a few companies can afford. Customers can achieve the same results using exaflop-scale Cloud TPU v4 Pods without incurring the costs of installing and maintaining an on-premises system.
Google’s Open division submissions consist of a 480-billion-parameter dense, Transformer-based, encoder-only benchmark using TensorFlow and a 200-billion-parameter benchmark using JAX. These models are architecturally similar to MLPerf’s BERT model, but with larger dimensions and more layers.
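To put those sizes in perspective, a dense Transformer encoder’s parameter count scales roughly as 12 · layers · d_model². The widths and depths below are hypothetical (the actual benchmark configurations are not specified here); the sketch only shows how dimensions and layer counts combine to reach the 200B and 480B scales:

```python
def transformer_encoder_params(d_model: int, n_layers: int,
                               vocab_size: int = 32_000) -> int:
    """Rough parameter count for a dense Transformer encoder stack."""
    attention = 4 * d_model * d_model           # Q, K, V, and output projections
    feed_forward = 2 * d_model * (4 * d_model)  # FFN with a 4x hidden expansion
    per_layer = attention + feed_forward        # ~12 * d_model^2
    embeddings = vocab_size * d_model           # small at these scales
    return n_layers * per_layer + embeddings

# Hypothetical widths/depths, chosen only to land near the two model sizes:
print(f"{transformer_encoder_params(16_384, 64) / 1e9:.0f}B")   # ~207B
print(f"{transformer_encoder_params(16_384, 149) / 1e9:.0f}B")  # ~480B
```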

These submissions demonstrate large-model scalability and high performance on TPUs across two distinct frameworks. Notably, with their stacked Transformer architecture, these benchmarks are comparable in their compute characteristics to other large language models.
The two submissions were benchmarked on 2048-chip and 1024-chip TPU v4 Pod slices, respectively. Google achieved an end-to-end training time of ~55 hours for the 480B-parameter model and ~40 hours for the 200B-parameter model. Each run achieved a computational efficiency of 63%, calculated as the ratio of the model’s floating-point operations (including compiler rematerialization) to the peak FLOPs the system could deliver in the same wall-clock time.
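In other words, the metric compares useful work to theoretical capacity. A back-of-envelope calculation from the published figures (the function name and exact accounting below are illustrative, and the run times above are rounded):

```python
def computational_efficiency(model_flops, peak_flops_per_chip,
                             num_chips, wall_clock_seconds):
    """Model FLOPs (including rematerialization) over peak system FLOPs."""
    return model_flops / (peak_flops_per_chip * num_chips * wall_clock_seconds)

# Inverting the formula for the 480B run: 63% efficiency on a 2048-chip
# slice at 275 TFLOPS/chip over ~55 hours implies roughly 7e22 model FLOPs.
model_flops_480b = 0.63 * 275e12 * 2048 * 55 * 3600
print(f"{model_flops_480b:.1e}")  # ~7.0e22
```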

Achieving these impressive results required a combination of several cutting-edge technologies. First, each TPU v4 chip provides more than 2x the compute power of a TPU v3 chip, up to 275 peak TFLOPS. Second, 4,096 TPU v4 chips are networked together into a Cloud TPU v4 Pod by an ultra-fast interconnect that provides 10x the bandwidth per chip at scale compared to typical GPU-based large-scale training systems.
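Those two figures are what make a full pod “exaflop-scale,” as described above:

```python
# Peak compute of a full Cloud TPU v4 Pod, from the figures above:
chips_per_pod = 4096
peak_tflops_per_chip = 275
print(chips_per_pod * peak_tflops_per_chip / 1e6)  # ~1.13 exaFLOPS
```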
Large models are very communication-intensive: local computation often depends on results of remote computation that must be communicated across the network. TPU v4’s ultra-fast interconnect has an outsized impact on the computational efficiency of large models by minimizing latency and congestion in the network. Google’s submissions represent a class of models that has become increasingly important in ML research and production but is not yet represented in MLPerf’s Closed division benchmark suite.
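To see why, consider the simplest data-parallel pattern: every training step ends with a gradient all-reduce, so no chip can apply its update until the network delivers results from every other chip. A minimal JAX sketch (illustrative only, not the benchmark implementation):

```python
import jax
import jax.numpy as jnp

def train_step(params, batch):
    def loss_fn(p):
        pred = batch["x"] @ p                      # local computation
        return jnp.mean((pred - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    # Cross-chip communication: every step blocks on the network here,
    # so interconnect latency and bandwidth sit on the critical path.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return params - 1e-3 * grads

parallel_step = jax.pmap(train_step, axis_name="devices")

# Toy usage: one data shard and one parameter replica per local device.
n = jax.local_device_count()
params = jnp.zeros((n, 8))
batch = {"x": jnp.ones((n, 4, 8)), "y": jnp.zeros((n, 4))}
params = parallel_step(params, batch)
```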