
NVIDIA Brings Ampere A100 GPUs to Google Cloud

Just over a month after announcing its latest-generation Ampere A100 GPU, NVIDIA said this week that the powerhouse processor is now available on Google Cloud. The A100-powered Accelerator-Optimized VM (A2) instance family is designed for demanding artificial-intelligence and data-analytics workloads.

NVIDIA reports that users can expect significant gains over previous-generation accelerators, on the order of a 20-fold increase in performance. The A100 delivers up to 19.5 TFLOPS of single-precision (FP32) compute and 156 TFLOPS of TensorFloat-32 (TF32) throughput for artificial-intelligence and high-performance-computing workloads.
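
As a rough illustration only (the framework and flags below are assumptions on our part, not details from NVIDIA's or Google's announcement), this is how TF32 is typically opted into from a recent PyTorch build running on an A100:

    # Minimal sketch, assuming a recent PyTorch build and an attached A100.
    # With these flags set, float32 matrix multiplies run on the Tensor
    # Cores in TF32, trading some mantissa precision for throughput.
    import torch

    torch.backends.cuda.matmul.allow_tf32 = True   # matmuls / linear layers
    torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = a @ b  # executes in TF32 on Ampere when the flags above are enabled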

The NVIDIA Ampere is the largest 7-nanometer chip built to date, with 54 billion transistors. It introduces features such as Multi-Instance GPU (MIG), automatic mixed precision, and NVLink, which doubles direct GPU-to-GPU bandwidth, while memory bandwidth rises to 1.6 TB per second. The accelerator packs 6,912 CUDA cores and 40 GB of HBM2 memory.
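
Those headline figures can be sanity-checked on a live instance. The snippet below is a small sketch that assumes PyTorch is installed and an A100 is attached; the 64-cores-per-SM factor is a known property of the Ampere GA100 rather than something the API reports:

    # Minimal sketch: reading back the specs quoted above from the driver.
    import torch

    props = torch.cuda.get_device_properties(0)
    print(props.name)                           # e.g. "A100-SXM4-40GB"
    print(props.multi_processor_count)          # 108 streaming multiprocessors
    print(props.multi_processor_count * 64)     # 6912 FP32 CUDA cores (64 per SM)
    print(round(props.total_memory / 1024**3))  # ~40 GB of HBM2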

When describing the Ampere architecture, NVIDIA stated that its improvements provide unparalleled acceleration at any scale.

The new cloud service is currently in alpha. The A2 family will be offered in five configurations to suit different business needs, ranging from one to 16 GPUs and from 85 GB to 1,360 GB of host memory.
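
For readers who want to see which of those shapes are exposed in their own project once the instances leave alpha, one hypothetical way to enumerate them is sketched below; the a2- machine-type prefix and the example zone are assumptions based on Google's naming conventions, not details from the announcement:

    # Hypothetical sketch: list A2 machine types with the gcloud CLI,
    # driven from Python. Requires an installed, authenticated Google
    # Cloud SDK; names and zones may differ in your project.
    import subprocess

    subprocess.run(
        ["gcloud", "compute", "machine-types", "list",
         "--filter=name~^a2-",          # assumed A2 naming prefix
         "--zones=us-central1-a"],      # example zone, an assumption
        check=True,
    )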

Google said that businesses can easily get started with the Ampere A100 GPUs, although pricing has not yet been announced. The company says the service will become generally available to the public later this year. The speed with which the new GPUs have reached the cloud reflects the growing needs of AI innovators.

NVIDIA claims that the A100 came to Google Cloud faster than any GPU in the company's history.