Anthropic, the AI firm behind the Claude chatbot and one known for prioritizing safe technology, appears to be loosening its safety principles to stay competitive. The company announced a revision to its Responsible Scaling Policy, a set of voluntary commitments intended to prevent the development of AI capable of causing catastrophic harm, such as large-scale cyber threats.
The updated guidelines still call for a “strong argument that catastrophic risk is contained” during AI development. However, they now permit development to continue when the company judges that doing so preserves a significant competitive advantage, rather than requiring a halt until safety concerns are fully addressed.
The company attributes the shift to a waning emphasis on AI safety in the U.S. relative to the technology’s economic promise. Anthropic pointed to the slow pace of government action on AI safety as attention has turned toward boosting AI competitiveness and fostering economic growth.
Despite its historical commitment to safety, Anthropic’s adjustment coincides with a Pentagon warning that it will terminate its contracts unless the company permits its technology to be used for all lawful military purposes. Anthropic asserts that the change in guidelines is unrelated to the Pentagon’s ultimatum.
Established in 2021 by former OpenAI employees, Anthropic initially aimed to address safety concerns overlooked by its predecessor. CEO Dario Amodei has consistently stressed the potential risks associated with AI advancements and reiterated the company’s unwavering dedication to safety in various interviews.
The company’s recent blog post framed the change as part of the ongoing evolution of its safety practices, emphasizing greater transparency and accountability through the regular publication of safety reports and goals. However, Heidy Khlaaf of the AI Now Institute noted that Anthropic has historically overlooked harms from AI systems already in use, such as chatbot errors that enable fraudulent activity.
As top AI companies like Anthropic, OpenAI, and Google engage in fierce competition and strategic alliances, the U.S. administration’s pro-development stance raises challenges for prioritizing safety. The absence of comprehensive AI regulations in both the U.S. and Canada poses dilemmas for companies striving to balance innovation with safety measures.
Anthropic’s safety-policy changes come amid Pentagon pressure to align the permitted uses of its technology with military requirements. The company’s $200 million deal with the U.S. Department of Defense is governed by strict usage guidelines, including a prohibition on using Anthropic’s AI tools for weapons development.
While Anthropic maintains its stance against the use of its technology in autonomous weapons and mass surveillance systems, the dispute with the Pentagon centers on usage policy rather than scaling principles. CEO Amodei reaffirmed the company’s refusal to compromise on its values, pledging a smooth transition to an alternative provider if necessary.
In light of the impending deadline, Anthropic remains resolute in its commitment to ethical AI deployment, prioritizing principles over potential financial gains.