Nvidia wants to retain its virtual monopoly on AI chips…
Nvidia has announced the upcoming arrival of its GH200 Grace Hopper Superchip, which it says will be able to handle “the most complex generative AI workloads, spanning large language models, recommendation systems and vector databases”.
Recap highlights from our special address at #SIGGRAPH2023, including the updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, and updates on @NVIDIAOmniverse with generative #AI. https://t.co/H925H3ROjo
— NVIDIA (@nvidia) August 8, 2023
The GH200 platform will feature the same GPU as the H100, currently Nvidia’s most powerful and popular AI offering, but will have three times the memory capacity.
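To see why memory capacity is the headline figure, a back-of-the-envelope calculation helps: a model’s weights have to fit in GPU memory before anything else. The sketch below is illustrative only; the 80 GB-per-GPU figure and the 16-bit weight format are assumptions for the sake of the arithmetic, not GH200 or H100 specifications.

```python
# Back-of-the-envelope check: do a model's weights fit on one GPU?
# The model sizes are illustrative; BYTES_PER_PARAM assumes 16-bit weights.

GPU_MEMORY_GB = 80          # assumption: one 80 GB accelerator
BYTES_PER_PARAM = 2         # fp16 / bf16 weights

for name, params_billion in [("7B model", 7), ("70B model", 70), ("175B model", 175)]:
    weights_gb = params_billion * 1e9 * BYTES_PER_PARAM / 1e9
    gpus_needed = -(-weights_gb // GPU_MEMORY_GB)   # ceiling division
    print(f"{name}: ~{weights_gb:.0f} GB of weights -> "
          f"needs at least {int(gpus_needed)} GPU(s) just for the parameters")
```

Tripling the memory per chip cuts the number of GPUs a given model has to be spread across, which is precisely the pitch behind the GH200.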
Nvidia has yet to reveal the price of the GH200 superchip, but for reference, the H100 range currently sells for around $40,000. A bit steep for a single chip?
In reality, complex AI models require ultra-high-performance GPUs to perform the calculations needed to generate text or images. Running large language models takes considerable processing power, and even with today’s Nvidia H100 chips, practitioners often have to split a model across several GPUs to get it to run, as in the sketch below.
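As a rough illustration of that splitting, here is a minimal PyTorch sketch that places the two halves of a toy layer stack on two different GPUs and moves the activations between them by hand; the layer sizes and device names are illustrative, not taken from any particular model.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Toy model whose layers are spread across two GPUs because
    neither device could hold the whole stack on its own."""

    def __init__(self, hidden=4096, layers_per_gpu=4):
        super().__init__()
        # First half of the stack lives on GPU 0 ...
        self.part1 = nn.Sequential(
            *[nn.Linear(hidden, hidden) for _ in range(layers_per_gpu)]
        ).to("cuda:0")
        # ... second half lives on GPU 1.
        self.part2 = nn.Sequential(
            *[nn.Linear(hidden, hidden) for _ in range(layers_per_gpu)]
        ).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # Activations are copied between devices at the split point.
        x = self.part2(x.to("cuda:1"))
        return x

model = SplitModel()
tokens = torch.randn(8, 4096)           # stand-in for a batch of embeddings
with torch.no_grad():
    out = model(tokens)
print(out.shape, out.device)            # torch.Size([8, 4096]) cuda:1
```

In practice, libraries such as Hugging Face Accelerate automate this placement (for example via `device_map="auto"` when loading a model), but the memory pressure it works around is the same.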
Nvidia has a virtual monopoly on AI-capable GPUs.
Most cloud providers, including Amazon Web Services, Microsoft Azure and Google Cloud, rely on Nvidia’s H100 Tensor Core GPUs, and they differentiate themselves by offering services that help customers build projects around large language models.
Microsoft and Nvidia have also teamed up to build new supercomputers, even though the Redmond giant aims to design its own AI chips. Nvidia will also have to contend with competition from AMD, which plans to put its own AI-dedicated GPU into production at the end of the year.