New Delhi, Feb 25, 2026: Anthropic has revised its safety policies to better align with the current global regulatory environment, which prioritises AI competitiveness and growth.
In an updated version of its Responsible Scaling Policy (RSP), a voluntary framework used by Anthropic to address catastrophic risks from AI systems, the Claude maker said that it would not stop developing an AI model classified as dangerous if a comparable or superior model had already been released by a competitor.
This marks a shift from the RSP it published two years ago, which stated that Anthropic would delay AI development it deemed potentially dangerous. The company attributed the change in its safety policy to the speed of AI development and the lack of consensus on AI regulation at the federal level, it said in a blog post on Tuesday, February 24.
The updated policy marks a dramatic shift given that Anthropic has been frequently labelled as one of the most safety-conscious players in the AI space. However, the AI startup has also come under intense competition from rivals such as OpenAI, Elon Musk’s xAI and Google, which regularly release cutting-edge tools.
“We hoped that announcing our RSP would encourage other AI companies to introduce similar policies […] Over time, we hoped RSPs, or similar policies, would become voluntary industry standards or go on to inform AI laws aimed at encouraging safety and transparency in AI model development,” Anthropic said.
Assessing how its previous RSPs have fared, it added that "some parts of this theory of change have played out as we hoped, but others have not."
What does the new policy state?
Anthropic highlighted three key changes to its RSP. Firstly, the company plans to separate the AI risk mitigations it wants to pursue itself from the broader AI safety recommendations it makes to the industry and to regulators around the globe. Secondly, the new RSP introduces a requirement to develop and publish a Frontier Safety Roadmap, which will describe its plans for risk mitigation across the areas of security, alignment, safeguards, and policy.
Finally, the AI startup said that its Risk Reports will be subject to external review by third parties who are "deeply familiar with AI safety research, are incentivised to be open and honest about Anthropic's safety position, and are free of major conflicts of interest."
Anthropic versus US Pentagon
The revised RSP comes against the backdrop of escalating tensions between Anthropic and the US Department of Defense over restrictions on how its Claude tools can be used for military purposes.
Anthropic has said that its policies do not allow its AI tools to be used for domestic surveillance or autonomous lethal activities. However, US Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday, February 24, that the company has until the end of this week to relax its usage policies.
The AI startup has also batted for regulations around model transparency and guardrails at both the state and federal levels. This stands in contrast to the position of the Trump administration, which has sought to curb states' ability to regulate AI.
