At its I/O conference in Mountain View, California, on Wednesday, Google announced a new A3 supercomputer designed to train machine learning and artificial intelligence models.
Built in partnership with Nvidia, a global leader in AI hardware and software that designs and manufactures graphics processing units for a range of industries, the new machine aims to offer a complete range of GPU options for training and inference of machine learning models.
GPUs can process many tasks in parallel, making them useful for machine learning, video editing and gaming.
The A3 supercomputers are “purpose-built to train and serve the most demanding AI models that power today’s generative AI and large language model innovation”, the company said.
“A3 GPU VMs [virtual machines] are a step forward for customers developing the most advanced ML models,” said Roy Kim, director of product management at Google Cloud, and Chris Kleban, group product manager at Google Cloud.
“By considerably speeding up the training and inference of ML models, A3 VMs enable businesses to train more complex ML models at a fast speed, creating an opportunity for our customers to build large language models, generative AI and diffusion models to help optimise operations and stay ahead of the competition.”
The global AI market is expected to grow at an annual rate of more than 38 per cent from 2022 to 2030, Grand View Research reported.
AI will be the common theme among the top 10 technology trends of the next few years, accelerating breakthroughs across major economic sectors and wider society, Alibaba Damo Academy, the global research arm of China's Alibaba Group, said in a report last year.
“Google Cloud's A3 VMs … will accelerate training and serving of generative AI applications,” said Ian Buck, vice president of hyperscale and high-performance computing at Nvidia.
“We are proud to continue our work with Google Cloud to help transform enterprises around the world with purpose-built AI infrastructure.”