Google wins MLPerf benchmark contest with fastest ML training supercomputer (Google Cloud Blog)

Table 1: All of these MLPerf submissions trained from scratch in 33 seconds or faster on Google’s new ML supercomputer.

2. Training at scale with TensorFlow, JAX, Lingvo, and XLA

Training complex ML models using thousands of TPU chips required a combination of algorithmic techniques and optimizations in TensorFlow, JAX, Lingvo, and XLA. To provide some background, XLA is the underlying …
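The excerpt above credits optimizations in TensorFlow, JAX, Lingvo, and the XLA compiler for scaling training to thousands of TPU chips. As a minimal, illustrative sketch (not Google's actual training code), the snippet below shows the core mechanism the post refers to: JAX traces a Python function and hands the resulting computation graph to XLA for compilation. The model, parameter names, and shapes here are hypothetical.

```python
# Hypothetical sketch: jax.jit compiles the traced function with XLA.
# On TPU hardware the same mechanism targets TPU chips; on CPU/GPU it
# targets those backends instead.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this on first call, then reuses the compiled code
def predict(params, x):
    w, b = params
    return jnp.tanh(x @ w + b)  # tiny one-layer model, purely illustrative

def loss(params, x, y):
    # Mean-squared error; jax.grad differentiates it, also via XLA.
    return jnp.mean((predict(params, x) - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))  # compiled gradient of the loss

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (3, 2))  # illustrative parameter shapes
b = jnp.zeros(2)
x = jnp.ones((4, 3))  # dummy batch of 4 inputs
y = jnp.zeros((4, 2))

g_w, g_b = grad_fn((w, b), x, y)
print(g_w.shape, g_b.shape)  # gradients match the parameter shapes
```

In real large-scale training, the same compiled functions are replicated across devices (e.g. with `jax.pmap`), which is how a single program spans many TPU chips.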