By Sam Boughedda
Alphabet's (NASDAQ:GOOGL) Google said in a research paper on Tuesday that its newest chips for training AI models are faster and more power-efficient than systems from Nvidia (NASDAQ:NVDA).
Its fourth-generation tensor processing unit (TPU), which the company has used in a supercomputer to train its artificial intelligence models, is said to be as much as 1.7 times faster and 1.9 times more power-efficient than a comparable system based on Nvidia's A100 chip.
However, Google said it did not compare the TPU v4 to Nvidia's flagship H100 chip, because the H100 came to market after Google's chip and was built with newer manufacturing technology.
In the research paper, Google described how it connected more than 4,000 of the chips into a supercomputer, using its custom-developed optical switches to link the individual machines.
Nvidia has been one of the companies leading the market for AI chips, but Google is making a serious push of its own. The company has invested heavily in AI research and development, and the TPU v4 is a sign that Google aims to position itself as a leader in the AI chip market.
In a comment to Reuters, Google hinted that it may be working on a new chip to compete with Nvidia's H100, with Google Fellow Norm Jouppi telling the publication that Google has "a healthy pipeline of future chips."