
'Pro-Human AI Declaration': US experts call for a roadmap for responsible AI development

Amid recent tensions in Washington involving Anthropic, which exposed the lack of clear regulations on artificial intelligence, a bipartisan group of academics and experts in the US has proposed something the government has yet to deliver: a framework for developing AI responsibly.


The statement, titled the 'Pro-Human AI Declaration,' was finalized before the confrontation between the Pentagon and Anthropic last week. However, for those involved in drafting it, the fact that these two events occurred almost simultaneously was no coincidence.

Max Tegmark, a physicist and AI researcher at MIT and one of the organizers of the initiative, said that in recent months there has been a notable shift in American public opinion. According to him, surveys show that up to 95% of Americans oppose an unregulated race to develop superintelligence.

The recently published document bears the signatures of hundreds of experts, former officials, and public figures. The opening section of the statement emphasizes that humanity stands at a critical juncture. One path – known as the 'replacement race' – could lead to a scenario where humans are gradually replaced, first in labor roles and then in decision-making, as power concentrates in organizations and uncontrollable machine systems. The other path leans towards developing AI to expand human capabilities rather than replace them.

According to the authors, pursuing the second approach requires adherence to five core principles: ensuring humans retain control, preventing excessive concentration of power, protecting human experiences, maintaining individual freedoms, and holding AI companies legally accountable.


The statement also puts forward some stronger proposals. These include banning the development of superintelligence until the scientific community reaches a consensus that the technology can be deployed safely and with democratic consent. Additionally, powerful AI systems must have mandatory emergency shutdown mechanisms and must not be built with architectures that allow self-replication, self-improvement, or resistance to shutdown.

The timing of this document's release also coincided with a period that made the urgency of the issue even clearer. On the last Friday of February, U.S. Defense Secretary Pete Hegseth designated Anthropic – a company whose AI systems are used on classified military platforms – as a 'supply chain risk' after the company refused to grant the Pentagon unlimited access to its technology. Such a warning label was previously reserved only for companies with ties to China.

Just hours later, OpenAI signed a separate agreement with the US Department of Defense. However, according to many legal experts, enforcing this agreement in practice could be very difficult. The whole affair illustrates the cost of the US Congress's delay in enacting legislation to regulate AI.


Dean Ball, a senior research fellow at the Foundation for American Innovation, told The New York Times that this is not simply a contract dispute. According to him, it is essentially America's first real debate about control over AI systems.

During the conversation, Tegmark offered an easy-to-understand analogy. He argued that the public shouldn't worry about a pharmaceutical company releasing a dangerous drug before proving its safety, because the U.S. Food and Drug Administration (FDA) wouldn't allow it. He suggested that AI should be regulated in a similar way.


Power struggles in Washington rarely generate enough social pressure to change the law. Tegmark argues that child safety could be the factor that breaks the current deadlock. The statement therefore calls for mandatory vetting before AI products are deployed, especially chatbots and companion apps aimed at young users.

These tests need to assess risks such as increasing suicidal thoughts, exacerbating mental health problems, or manipulating users' emotions.

Tegmark gives an example: if a man impersonates a teenage girl and texts an 11-year-old boy, attempting to incite him to commit suicide, that man could face criminal prosecution. According to him, the law already has clear provisions for this behavior. Therefore, he questions: why would it be different if the same act were carried out by a machine system?

He believes that if the pre-release review principle were applied to products intended for children, its scope would soon expand. Society could then impose additional requirements, such as checking whether AI could be exploited to aid terrorism through the creation of biological weapons, or ensuring that a superintelligent system could not pose a threat to the government.

Notably, the statement received support from figures with vastly different political views. Among the signatories were former Donald Trump advisor Steve Bannon, former National Security Advisor under Barack Obama Susan Rice, former Chairman of the Joint Chiefs of Staff Mike Mullen, and numerous progressive religious leaders.

Kareem Winters
Update 16 March 2026