With numerous nations across the Middle East and beyond scrambling to establish their own sovereign AI ecosystems, investing billions in infrastructure and cutting-edge models, one critical question surfaces: why do so many of these mega projects never reach production?

Ruchir Puri, Chief Scientist and Vice President at IBM Research, in a fireside chat with Mike Butcher, founder of Pathfounders, shared three fundamental organisational issues that most governments continue to underestimate.

Speaking at the session, Puri identified what he calls the “most often observed failure modes” in sovereign AI deployments. First and foremost is the lack of organisation around data. “Your data is in very diverse environments, very diverse formats,” Puri explained. This seemingly simple problem becomes acute when governments attempt to deploy AI systems at scale. Without proper data governance and standardisation, even the most sophisticated AI infrastructure delivers little value.
The second failure mode, according to the scientist, is a mismatch between expectations and capabilities. “The expectations are here, the delivery is there,” Puri noted, emphasising the need for projects that are “not too big, not too small – targeted the right way.” This disconnect often stems from unrealistic promises about what AI can deliver, coupled with insufficient understanding of the actual challenges involved in implementation.

Perhaps most critically, Puri highlighted what he calls “friction in skills” and the underappreciated aspect of culture. “Culture is one of the most important aspects of rolling out new technologies,” he stressed. “You need to bring your workforce along with you. You cannot just shove something down their throat. It doesn’t work like that.” This cultural resistance, combined with skills gaps, creates an environment in which even well-funded projects stall.
To understand why these failures matter so acutely, it helps to be clear about what sovereign AI actually means. Puri defined sovereign AI as “controlling your future – from infrastructure all the way up to your applications.” This comprehensive approach encompasses security, compliance, and governance, creating what he calls a “control plane that allows you to control your destiny.”
For countries like Qatar, the UAE, and Saudi Arabia, which are pouring multi-billion-dollar investments into sovereign AI systems, the stakes couldn’t be higher. However, throwing billions at the problem isn’t enough if the fundamental organisational issues remain unaddressed.
Puri’s solution framework centres on what he calls the shift from “hybrid cloud to hybrid AI”. Just as the cloud revolution eventually moved away from a public-cloud-only model to embrace on-premises and hybrid solutions, he argued, AI will follow the same trajectory. “You will have some of the frontier models, you will have some of the local models that you are running, and there are some models that you’ll be running on your device as well,” Puri explained.
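To make that layered picture concrete, the sketch below captures the general shape of the idea; it is an illustration rather than IBM’s or any government’s actual architecture, and the tier names, endpoints, and runtime labels are invented.

```python
# Hypothetical sketch of the three model tiers Puri describes: frontier models
# reached over an external API, local models on sovereign infrastructure, and
# small models running on the device itself. All names and endpoints are made up.
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str                    # label for the tier
    runs_on: str                 # where inference happens
    endpoint: str                # illustrative endpoint or runtime
    data_leaves_premises: bool   # whether prompts leave sovereign control

TIERS = [
    ModelTier("frontier", "public cloud", "https://api.example-frontier.ai", True),
    ModelTier("local", "sovereign data centre", "http://10.0.0.5:8080/v1", False),
    ModelTier("on-device", "laptop or phone", "local runtime for a small open model", False),
]

for tier in TIERS:
    print(f"{tier.name}: runs on {tier.runs_on}, data leaves premises: {tier.data_leaves_premises}")
```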
This approach requires relying on open ecosystems rather than closed, proprietary systems. “One thing that is critically important to automation and AI is actually trust in AI,” Puri emphasised. “And trust comes from knowing what capabilities you are running.” Closed systems, by their nature, undermine this trust and make true sovereignty impossible.
The infrastructure stack for sovereign AI rests on three fundamental layers: infrastructure itself, data sovereignty, and the model layer. Of these, Puri argues that data sovereignty remains the most critical and most underappreciated. “Not having control of your data is a disaster for AI, and anybody who has tried to build an enterprise-grade AI system would agree with me on this,” he stated emphatically.

In a region where energy is abundant but efficiency matters, Puri made a provocative comparison between what he calls “real intelligence” and artificial general intelligence. The human brain, he noted, fits in about 1,200 cubic centimetres and consumes just 20 watts, the energy of an LED bulb. In contrast, a single Nvidia Blackwell B100 GPU consumes 1,200 watts, roughly 60 times the power draw of the human brain.
“You don’t need a frontier model to write an email,” Puri argued. “You don’t need a frontier model to just chat. Yes, you need it for deep reasoning and research. But 95 per cent of the tasks in the world can be done with much higher energy efficiency.” This insight challenges the prevailing assumption that bigger is always better in AI deployment.
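To illustrate the “right-sized” routing Puri is pointing to, here is a minimal sketch, assuming a crude keyword-based classifier and made-up model names; a real deployment would classify tasks far more carefully.

```python
# Minimal sketch of task-based model routing: send everyday tasks to a small,
# energy-efficient local model and reserve a frontier model for deep reasoning.
# The keyword rule and model names are illustrative assumptions, not a real system.
def pick_model(task: str) -> str:
    reasoning_keywords = ("prove", "derive", "research", "multi-step", "analyse")
    if any(keyword in task.lower() for keyword in reasoning_keywords):
        return "frontier-model"      # large hosted model for deep reasoning
    return "small-local-model"       # efficient open model for routine work

for task in ["Write an email to the ministry", "Derive a multi-step policy analysis"]:
    print(task, "->", pick_model(task))
```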
When asked for concrete solutions to overcome these failure modes, Puri offered specific, actionable advice: “Watch out for people who are open-minded in your organisation. Create the right-sized task and give it to them and empower them, and then watch the fun happen.”
This people-centred approach recognises that technological transformation is ultimately a people problem. Rather than attempting organisation-wide rollouts that create resistance, governments should identify champions, give them properly scoped projects, and let success stories build momentum organically.
Puri also challenged the notion that access to frontier models is essential for every task. “Whatever is frontier today in these frontier models will be available in a smaller, open model nine months from now,” he observed. This suggests that patience and strategic timing can be as valuable as racing to deploy the latest models.
The author is at the AI Everything Event in Cairo, Egypt at the invitation of GITEX Global. The event is being organised by GITEX and hosted by Egypt’s Ministry of Communications and Information Technology (MCIT) in partnership with the Information Technology Industry Development Agency (ITIDA).


