It looked like a ceremonial gesture. It was actually the starting gun for India's most powerful AI supercomputing project, and a glimpse of how geopolitics and compute are becoming inseparable.
Diplomatic gifts are usually symbolic. A cultural artefact. A piece of art. Something that photographs well and means little beyond the photo opportunity.
What UAE President Sheikh Mohamed bin Zayed Al Nahyan handed to Prime Minister Narendra Modi in Abu Dhabi on May 15 was different. It was a Cerebras wafer-scale chip — and it wasn’t a souvenir. It was a signal.
The exchange marked the formal execution of Condor Galaxy India, an ambitious 8-exaflop AI supercomputing partnership between India and Abu Dhabi-based tech major G42. The physical chip handed to PM Modi is the foundational building block of this infrastructure.
The headline from Modi’s UAE visit was a $5 billion investment commitment from the Gulf nation. The chip was the quieter, more consequential story.
What Just Got Formalised?
G42 and the Government of India have formalised the framework and commercial terms for the deployment of Condor Galaxy India — an 8-exaflop AI supercomputing cluster comprising 64 Cerebras CS-3 systems. The exchange was witnessed by UAE President Sheikh Mohamed bin Zayed Al Nahyan and PM Modi during the latter’s official state visit to Abu Dhabi.
The system will be built in partnership with the Mohamed Bin Zayed University of Artificial Intelligence and India’s Centre for Development of Advanced Computing (C-DAC). Under the framework, G42 will be responsible for the installation, deployment, operations, and maintenance of the system.
This isn’t a new idea — it’s been in the works since January 2026, when the two leaders first announced the partnership during Al Nahyan’s visit to India. What happened in Abu Dhabi this week is the commercial framework being locked in. Work can now begin in earnest.
The Chip That Makes It Possible
To understand why this matters, you need to understand what a Cerebras chip actually is — because it’s genuinely unlike anything else in the market.
Traditional GPUs — the workhorses of modern AI — are manufactured by printing hundreds of small chips onto a large silicon wafer, which is then cut into individual units. Cerebras threw that playbook out entirely.
Cerebras Systems builds a single massive processor using an entire silicon wafer, an architecture it calls the Wafer-Scale Engine (WSE). This chip houses over 4 trillion transistors and close to 1 million AI-optimised cores on a single piece of silicon.
Each of Cerebras' 23 kW CS-3 systems carries 44 GB of on-chip SRAM delivering 21 petabytes per second of memory bandwidth, roughly 1,000x the HBM4 bandwidth of Nvidia's Rubin GPUs. By keeping the entire processing network on one continuous wafer, Cerebras dramatically reduces data-movement latency, delivering up to 20x faster AI training and inference than conventional GPU-based alternatives.
The manufacturing challenge this creates is immense: a single speck of dust during production could create a defect that kills the entire chip. Cerebras solved this by building thousands of spare cores directly into the silicon, allowing the chip to automatically reroute data around any flaws — a remarkable piece of engineering that took years to crack.
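The redundancy idea can be illustrated with a toy sketch: model the wafer as a small 2D mesh of cores and find a data path that detours around a defective one. This is an illustrative simplification (the grid model, function names, and BFS routing are my own), not Cerebras' actual fault-tolerance logic.

```python
from collections import deque

def route(grid_w, grid_h, defective, start, goal):
    """Find a path between two cores on a 2D mesh, stepping only
    through healthy cores -- a toy stand-in for hardware rerouting."""
    if start in defective or goal in defective:
        return None
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and nxt not in seen and nxt not in defective):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# A defective core at (1, 0) forces traffic to detour around it.
path = route(3, 3, defective={(1, 0)}, start=(0, 0), goal=(2, 0))
```

On real silicon the equivalent decision is made in hardware, transparently to software; the point is simply that with spare cores and a mesh fabric, a single dead core need not kill the chip.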
Founded in 2015 by Andrew Feldman and a team of semiconductor veterans, Cerebras recently completed one of the most significant IPOs in the AI sector, listing on the Nasdaq under the ticker CBRS, reflecting strong market confidence in the AI infrastructure space.
What 8 Exaflops Actually Means for India
Numbers like “8 exaflops” are easy to gloss over. Here’s what they mean in practice.
India’s current flagship AI supercomputer, AIRAWAT, hosted at C-DAC in Pune, has a peak capacity of around 200 AI petaflops and is presently ranked 188th in the world. Combined with PARAM Siddhi-AI, the country’s two flagship systems together deliver around 410 petaflops of peak compute.
Condor Galaxy India’s proposed 8,000-petaflop capacity is roughly 19.5 times that combined figure. In one deployment, India’s sovereign AI compute capacity leaps from a respectable mid-table position to something approaching the frontier.
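The gap can be sanity-checked with quick arithmetic, using the figures quoted above (the variable names are mine):

```python
# Back-of-the-envelope comparison, all figures in petaflops as quoted in the article.
condor_galaxy_india_pf = 8_000   # 8 exaflops = 8,000 petaflops
combined_flagship_pf = 410       # AIRAWAT + PARAM Siddhi-AI, peak compute

multiplier = condor_galaxy_india_pf / combined_flagship_pf
print(f"{multiplier:.1f}x")  # ~19.5x today's combined flagship capacity
```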
G42 is one of Cerebras’ largest backers, having previously financed the Condor Galaxy deployment effort in the United States at an estimated cost of $900 million. The India deployment extends that proven infrastructure into what both partners describe as one of the world’s most consequential emerging markets.
Why This Goes Beyond Compute
The strategic implications run deeper than processing speed.
For India, this is fundamentally about data sovereignty. By hosting the infrastructure domestically — rather than routing AI workloads through foreign cloud providers — India retains control over where its data sits, how it’s processed, and who can access it. For a country building out AI applications across healthcare, defence, agriculture, and governance, that control is not a minor footnote.
G42 India CEO Manu Jain framed it directly: “Sovereign AI infrastructure is becoming essential for national competitiveness. This project brings that capability to India at a national scale, enabling local researchers, innovators, and enterprises to become AI-native while maintaining full data sovereignty and security.”
For Indian startups and researchers, the practical benefit is access. Frontier-scale compute has historically been the preserve of large corporations and well-funded research institutions. A nationally accessible supercomputing cluster — with affordable access built in by design — changes the landscape for the next generation of AI builders.
The applications being discussed span drug discovery, satellite data processing for disaster management, smart energy grid simulation, genomics research, and geospatial analytics. Each of these requires the kind of compute that, until now, India has had to source externally.
The Geopolitical Backdrop
The deal is a natural extension of a 2024 digital infrastructure memorandum of understanding between the UAE and India. It sits within a broader deepening of bilateral ties that now spans defence, technology, space, and energy — and reflects a growing pattern of nations treating AI infrastructure as a strategic asset rather than a commercial one.
That the UAE is the partner in this endeavour — and that it’s deploying G42, its most prominent tech vehicle, to deliver it — speaks to Abu Dhabi’s ambition to position itself as the compute backbone for the Global South. G42 has already released NANDA 87B, an open-source Hindi-English large language model, signalling genuine commitment to building AI that works for Indian users in Indian languages.
Cerebras CEO Andrew Feldman captured the moment on social media after the chip was presented: the company he founded to solve one of semiconductor history’s hardest problems had just become the centrepiece of a state-level diplomatic exchange between two of the world’s most consequential nations.
That’s not a bad outcome for a startup that spent its first few years convincing the world that putting an entire silicon wafer on a single chip was actually a good idea.