Pakistan’s AI Policy: A Vision Without Teeth

AI Policy
Pakistan
Governance
A critical analysis of Pakistan’s National AI Policy 2025 — what it gets right, what it leaves out, and what must change.
Author

Zahid Asghar

Published

June 15, 2025

It is heartening to see Pakistan's National AI Policy, because artificial intelligence is not just another technology trend; it is the foundation of tomorrow's economy. Its release should have been a moment of celebration. But as I read through the 40-page document released by the Ministry of IT and Telecommunication, a familiar feeling set in, the same one other policy documents leave behind. We love grand visions and ambitious targets, paired with a troubling absence of the machinery needed to achieve them. We talk very well about macro ambitions but are very poor at delivering at the micro level.

Let me be clear: having an AI policy is far better than having none. The document's four-pillar architecture shows that someone in Islamabad understands the scope of what we're dealing with. The policy speaks of training one million professionals, establishing Centers of Excellence, and creating a National AI Fund. These are the right conversations to have, but they remain generic. Good intentions alone do not build ecosystems or train algorithms.

The fundamental problem isn't what the policy says, but what it leaves out. Other countries have enforceable policies. Singapore has clear risk classifications for AI. The European Union has binding rules for high-risk use cases. India already has operational testing standards. Pakistan's policy, in contrast, is mostly aspiration. It is like taking a pencil sketch to a construction site and wondering why the building won't rise.

The Biggest Gap: Risk Classification

The biggest gap is risk classification. Every serious AI framework in the world recognizes that not all AI is created equal. A video recommendation engine is very different from an AI system making medical diagnoses or approving loans. Yet our policy treats them all the same; in truth, it does not really treat them at all. If we don't set clear rules and responsibilities, everything will turn messy and public trust will soon erode.

Placing the AI Directorate under the Data Protection Commission is also poorly thought out. It is like asking traffic police to manage aviation. Both involve movement, but the expertise, challenges, and stakes are entirely different. This is not just bureaucratic inefficiency; it is a recipe for stalled decision-making at a time when speed is critical.

The targets read more like a wish list than a serious roadmap. Training one million people in AI sounds impressive, but there is no clarity on who will train them, what curriculum will be used, or how quality will be ensured.

Safety and Evaluation Standards

The most alarming omission is safety and evaluation standards. The message seems to be: “Go ahead and develop AI, deploy it, and we’ll worry about safety later.” This is not just careless — it is dangerous. One bad AI decision in healthcare or banking could destroy public trust for years. Yet the policy does not explain how systems will be tested, who will take responsibility if they fail, or what happens if an algorithm discriminates against people.

We don't need to reinvent the wheel. Other countries have already done the hard work of figuring out how to regulate AI safely. Singapore's Model AI Governance Framework is publicly available. The UK's approach to AI assurance is well-documented. Even Malaysia, hardly a global tech leader, has clearer implementation mechanisms than what we're proposing. We can adapt these frameworks, since the benefits and perils of AI are universal in nature.

What Pakistan Actually Needs

What Pakistan urgently needs is an actionable policy with a focus on implementation. We need to start small and build gradually, rather than making grand announcements that we cannot deliver. Instead of promising to train one million people, why not start by properly training one hundred AI safety experts who actually know what they are doing? We need a risk-based classification system that separates the trivial from the critical, and mandatory safety evaluations for high-risk AI applications before they're deployed, not after they've caused harm.

The government should also be honest about funding and its current capacity. The National AI Fund is a good idea, but where’s the money? Government officials struggle with basic IT systems, yet we expect them to regulate advanced artificial intelligence. Our legal system is still figuring out cybercrime law, but we want to become leaders in AI governance.

The Choice Ahead

I’m not suggesting we abandon ambition. Pakistan’s young population, its growing IT sector, and its pressing development challenges make it an ideal laboratory for AI innovation. But ambition without mechanism is just daydreaming. We need boring things like procurement standards, incident reporting systems, and compliance frameworks.

The government now faces a choice. It can either treat this AI policy as another opportunity for photo sessions and press releases, or it can roll up its sleeves and build real implementation capacity. That choice will decide whether Pakistan joins the AI race as a player or remains a spectator.

If we stick to the usual pattern of bold announcements followed by weak execution, this AI policy will join the long list of well-intentioned documents that changed nothing. The clock is ticking, and the world will not wait for us to catch up.


The author is a Professor, School of Economics, Quaid-i-Azam University, Islamabad. The views expressed are personal.
