Google and Meta Work Together to Challenge Nvidia in AI Computing: Report
Google is said to be working on a new initiative to make its artificial intelligence chips better at running PyTorch, according to Reuters. The project, reportedly called “TorchTPU,” is designed to challenge Nvidia’s grip on AI computing by making Google’s Tensor Processing Units easier to use for companies whose systems already depend on PyTorch.
The effort sits at the centre of Google’s wider attempt to grow cloud revenue from AI workloads. TPU sales have become a major focus for Google Cloud, which wants to show investors that Alphabet’s heavy AI spending is starting to convert into paying business from external customers.

Google PyTorch TPUs push to weaken Nvidia’s AI dominance
Sources said TorchTPU aims to remove a key obstacle that has held back TPU adoption. Many enterprises use PyTorch as their default AI framework, yet Google’s chips have been tuned around different internal tools. By making TPUs more developer-friendly for PyTorch users, Google hopes to cut the extra engineering work currently needed.
Compared with earlier, limited attempts to support PyTorch on TPUs, TorchTPU is drawing more organisational attention and funding, according to people familiar with the plans. As more companies seek alternatives to Nvidia, some view Google’s software stack as a bottleneck, pressuring Google to align its tools with mainstream developer workflows.
Google PyTorch TPUs strategy and software ecosystem clash
For years, Google encouraged internal teams to rely on JAX, another machine-learning framework, instead of PyTorch. TPU performance tuning has also largely revolved around XLA, a compiler that optimises JAX-based code. This strategy widened the gap between how Google’s engineers use TPUs and how most customers actually build AI models using PyTorch.
The mismatch means many developers cannot simply move PyTorch models onto TPUs and match Nvidia’s performance. Doing so often requires substantial code changes, extra tooling, and specialist skills. That additional work takes time and adds cost for businesses racing to deploy generative AI systems and maintain competitive momentum against rivals.
Enterprise clients have told Google that TPUs are harder to adopt for AI workloads because of this history, the sources said. Many teams prefer to keep using PyTorch instead of shifting projects to JAX, which would involve retraining staff and rewriting model pipelines. TorchTPU is intended to ease those concerns by letting PyTorch users run on TPUs with fewer trade-offs.
Google PyTorch TPUs and PyTorch–CUDA history
PyTorch, first released in 2016 and backed heavily by Meta Platforms, has become one of the most popular tools for building AI models. Instead of writing low-level instructions for chips from Nvidia, Advanced Micro Devices or Google, most developers rely on PyTorch’s libraries to handle many standard AI programming tasks.
Nvidia’s engineers have spent years making sure PyTorch-based software runs efficiently on Nvidia GPUs through CUDA. Some Wall Street analysts see CUDA, which is tightly linked to PyTorch, as Nvidia’s strongest defence against competing chips. That mature ecosystem has made Nvidia hardware the default choice for training and deploying many large AI models.
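This hardware abstraction is the crux of the issue: the same PyTorch model code can, in principle, target different chips through PyTorch’s “device” mechanism, and CUDA’s maturity is why that works so smoothly on Nvidia GPUs today. A minimal sketch of the idea, using only standard PyTorch (no TPU-specific code):

```python
import torch

# The same PyTorch code can target different hardware through the
# "device" abstraction; CUDA is used automatically when available,
# otherwise the computation falls back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4, 8, device=device)   # input batch on the chosen device
w = torch.randn(8, 2, device=device)   # weight matrix on the same device
y = x @ w                              # matrix multiply runs on `device`

print(y.shape)  # torch.Size([4, 2])
```

Running PyTorch on TPUs today typically goes through the separate PyTorch/XLA bridge rather than a device string like `"cuda"`, which is part of the extra tooling and code changes the article describes TorchTPU as aiming to reduce.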
Google PyTorch TPUs collaboration with Meta and cloud shift
To accelerate TorchTPU, Google is working closely with Meta, which manages PyTorch, according to the sources. The companies have discussed agreements that would give Meta access to more TPUs, as first reported by The Information. Previous offers to Meta were structured as Google-managed services, with Google installing and operating its chips for such customers.
People familiar with the talks said Meta wants software that lets its models run well on TPUs, partly to lower inference costs. Meta also seeks to diversify its AI infrastructure beyond Nvidia GPUs, a shift that could strengthen its bargaining power in future chip supply and pricing negotiations. Meta did not comment on the discussions.
Alphabet once kept most TPUs for internal use, but that approach shifted in 2022. Google Cloud successfully argued to oversee TPU sales, gaining a larger allocation of chips for outside customers. As corporate interest in AI has grown, Google has increased TPU production and targeted more third-party workloads to capture extra cloud spending.
A Google Cloud spokesperson did not discuss TorchTPU details but confirmed the broader strategy of offering customers more hardware options. "We are seeing massive, accelerating demand for both our TPU and GPU infrastructure," the spokesperson said. "Our focus is providing the flexibility and scale developers need, regardless of the hardware they choose to build on."
If TorchTPU delivers on its aims, companies could move PyTorch models onto TPUs with lower switching costs. That might ease dependence on Nvidia’s GPUs, which currently benefit from the tight link between PyTorch and CUDA. For Google, stronger PyTorch support on TPUs would align its cloud AI hardware more closely with how most developers already work.

