
Why AI cannot be 'recalled' like a faulty drug: the challenge of global AI governance

At a recent AI summit in New Delhi, OpenAI CEO Sam Altman warned that early versions of superintelligent artificial intelligence could appear as early as 2028. He also stressed that AI could be weaponized to create new pathogens, and that democratic societies need to act before being overtaken by the very technology they created.

These concerns don't come from Altman alone. Geoffrey Hinton, often referred to as the 'godfather of AI', has repeatedly warned that creating digital entities more intelligent than humans could become a real existential threat.

Similarly, Mustafa Suleyman, in his book The Coming Wave, argues that when AI combines with synthetic biology, it could enable a single individual to create a deadly pandemic.

This is no longer a warning about the distant future. Just last week, a conflict over who controls AI, and under what conditions, caused a complete breakdown in relations between a tech company and the U.S. War Department.

Why compare AI to the pharmaceutical industry?

When politicians and business leaders try to understand these issues, they often look to the pharmaceutical industry as a model for governance.

Richard Blumenthal, one of the few U.S. lawmakers actively advocating for AI regulation, has suggested that the way the U.S. government regulates the pharmaceutical industry could serve as a model for AI regulation.

This comparison sounds reasonable. The pharmaceutical industry demonstrates that strict licensing and close oversight can control dangerous technologies without stifling innovation.

In fact, many tech companies have adopted this logic without realizing it. They manage AI risks using familiar steps such as pre-deployment testing, phased evaluation, and post-launch monitoring. In other words, the pharmaceutical model has become the default governance framework in many AI organizations.

But there is a problem: it's the wrong framework. And the difference isn't just technical; it's fundamental.

The pharmaceutical governance model works effectively because of three basic conditions: high barriers to entry, physical products, and slow development cycles. AI meets none of those conditions.

1. Barriers to entry

Bringing a new drug to market can cost around $1.1 billion, according to a 2020 study published in the Journal of the American Medical Association.

Laboratories, clinical trials, and manufacturing plants mean that only a select few companies can participate — and regulators can easily track them.

AI is completely different. A powerful model can be built at a much lower cost, fine-tuned on consumer hardware, and deployed globally from just a laptop.

This means that regulators must monitor not just a few companies, but potentially anyone, anywhere.

2. Physical products

Medicines are physical products. Their production and distribution require raw materials, machinery, and logistics. These factors create checkpoints for regulatory agencies to monitor.

AI is different.

Once a model is released, its weights can be precisely replicated and disseminated globally within minutes. The cost of replication is virtually zero.

More importantly, you can't recall software like you can a faulty drug. Once it's spread on the internet, it's there forever.

Even cloud-only models aren't entirely secure. Recently, Anthropic revealed that three Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax—used 24,000 accounts to generate over 16 million interactions with Claude in order to extract the model's capabilities through 'distillation' techniques.

They don't need to infiltrate a supply chain or build factories. API access and cleverly designed prompts are enough.
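
To make the mechanics concrete, here is a minimal Python sketch of how API-based distillation works: query the target model, save its answers, and use the pairs to train a cheaper imitator. The endpoint, model name, and response format below are hypothetical assumptions for illustration; the real pipelines described above run at a vastly larger scale.

import json
import requests

API_URL = "https://api.example-llm.com/v1/chat"  # hypothetical endpoint
API_KEY = "sk-..."                               # an ordinary customer account

prompts = [
    "Explain step by step how to optimize a slow SQL query.",
    "Translate this legal clause into plain English: ...",
    # ...thousands more prompts probing the target model's capabilities
]

pairs = []
for prompt in prompts:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "frontier-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    # Response shape is an assumption modeled on common chat APIs.
    answer = resp.json()["choices"][0]["message"]["content"]
    # Each (prompt, answer) pair becomes a training example for a smaller
    # "student" model that learns to imitate the target's behavior.
    pairs.append({"prompt": prompt, "completion": answer})

with open("distillation_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")

Nothing here requires privileged access: the same interface offered to every paying customer doubles as the extraction channel.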

There is no equivalent in the pharmaceutical industry.

3. Speed of development

The drug approval process often takes years. Meanwhile, AI evolves at the speed of software development.

The model's capabilities are enhanced not only by better hardware but also by:

  1. new training methods
  2. software updates
  3. continuous release cycles

For example, Anthropic released two major versions of Claude in just 10 weeks.

This means that by the time a pharmaceutical-style approval process finished evaluating a model, that model would already be outdated, replaced by a more capable version.


Why the 'test-deploy-monitor' model is insufficient.

The pharmaceutical mindset exists not only in government but is also widespread in businesses.

In the pharmaceutical industry, a familiar risk is side effects. You test before release, monitor after release, and recall if something goes wrong.

Many companies are applying similar logic to AI. This sounds responsible. But it actually creates a false sense of security.

Methods such as pre-deployment testing and phased evaluation remain valuable. They help detect errors, establish operational discipline, and demonstrate due diligence to the board of directors.

But they only address familiar risks such as product defects or technical errors.

Meanwhile, the risks of AI take a completely different form: they relate to irreversibility, rapid spread, and the potential for abuse.

Unlike with defective products, when AI causes serious consequences, you cannot issue a recall.

A new approach to AI risk management.

To address this problem, the authors propose the CARE governance framework.

CARE consists of four steps:

  1. Catastrophize (identify the worst-case scenarios),
  2. Assess (gauge how likely and severe those scenarios are),
  3. Regulate (establish control measures),
  4. Exit (prepare a plan in case the controls fail).

When applied to a business, this framework leads to several key actions.

First, leaders need to identify 'shadow AI': AI tools that employees are using but that the company does not provide.
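
As a first pass, surfacing shadow AI can be as simple as scanning proxy or DNS logs for traffic to known AI services. The sketch below assumes a simple CSV export with timestamp, user, and domain columns, and an illustrative domain list; a real inventory would need curated log sources and an up-to-date list.

import csv
from collections import Counter

# Illustrative list only; maintain and extend it for real use.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "chat.deepseek.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) pair for known AI services."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: timestamp,user,domain
            if row["domain"] in AI_SERVICE_DOMAINS:
                hits[(row["user"], row["domain"])] += 1
    return hits

# Example: print the ten heaviest users of unsanctioned AI tools.
for (user, domain), count in find_shadow_ai("proxy_log.csv").most_common(10):
    print(f"{user} -> {domain}: {count} requests")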

Next, identify the irreversible points, such as automated emails sent to customers, AI-generated code fed directly into production systems, or algorithm-based recruitment systems.

Businesses also need tight control over their data. Each AI tool is essentially a two-way data pipeline. Once proprietary data is fed into a third-party system, it is nearly impossible to claw it back.
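
One concrete control is an outbound gate that blocks prompts containing proprietary markers before they ever reach a third-party tool. The patterns and the project codename below are hypothetical; a real deployment would pair such a gate with logging, redaction, and human review.

import re

# Illustrative patterns; tune these to your own data classification rules.
BLOCKED_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # SSN-like identifiers
    re.compile(r"project[-_ ]atlas", re.IGNORECASE),  # hypothetical codename
]

def check_outbound(prompt: str) -> str:
    """Raise if a prompt matches a blocked pattern; otherwise pass it through."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Blocked: prompt matches {pattern.pattern!r}")
    return prompt

# Example: this prompt is stopped before any data leaves the company.
try:
    check_outbound("Summarize the CONFIDENTIAL Q3 board memo.")
except ValueError as err:
    print(err)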

In addition, organizations need 'red teams' not only to find technical bugs but also to probe for potential abuse.

Ultimately, responsibility for AI risks must rest with a specific leader, just as the chief financial officer is responsible for financial risks.

For decades, the pharmaceutical-style governance model has been considered one of the most successful governance frameworks: it both protects the public and allows for innovation.

But with AI, that model isn't enough.

At the summit in New Delhi, Sam Altman called for the establishment of an international AI regulatory body similar to the International Atomic Energy Agency.

This is a more realistic view of the nature of AI: a technology that needs oversight mechanisms commensurate with its actual level of risk, rather than models borrowed wholesale from other industries.

Business leaders should think the same way. The problems governments face at the international level also exist within businesses.

Therefore, design a governance system that matches the technology you are actually using, not the technology you wish it resembled.

By Isabella Humphrey
Updated 20 March 2026