More than two dozen former defense and intelligence officials, joined by academics and tech policy leaders, have condemned the Pentagon's decision to designate the AI company Anthropic a supply chain risk. The coalition, which spans political affiliations, warns that the move threatens American innovation, the rule of law, and the proper limits of executive power.
The letter addressed to members of Congress emphasizes the urgent need for clear guidelines governing the use of AI, particularly in the contexts of domestic surveillance and autonomous weapons systems—two contentious issues at the heart of the current conflict. Anthropic has maintained strict ethical guidelines and has resisted military pressure to alter these policies, a stance that has provoked strong reactions from Defense Secretary Pete Hegseth and President Donald Trump. They have reportedly attempted to blacklist the company, urging other firms contracted by the government to sever business ties with Anthropic.
The signatories of the letter, who include former CIA Director Michael Hayden, retired Vice Admiral Donald Arthur, and former Deputy Assistant Secretary of Defense Diana Banks Thompson, have called the Pentagon's move an “inappropriate use of executive authority against Anthropic.” Brad Carson, president of Americans for Responsible Innovation and a former Under Secretary of the Army, stated, “The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent.”
The letter argues that supply chain risk designations should be reserved to protect the U.S. from foreign threats—specifically, companies with ties to adversarial nations such as China or Russia—rather than targeting American innovators who operate transparently and within the legal framework.
Moreover, the signatories assert that the issues of fully autonomous weapons and mass surveillance are not fringe concerns but rather mainstream positions backed by international law. The prohibition on fully autonomous lethal systems aligns with the laws of armed conflict, including principles of distinction and proportionality outlined in the Geneva Conventions. Similarly, the call against mass domestic surveillance is rooted in the Fourth Amendment and is supported by U.S. commitments under the International Covenant on Civil and Political Rights.
The coalition warns that blacklisting an American company like Anthropic could undermine U.S. competitiveness in the technology sector, creating an environment in which serious entrepreneurs and investors cannot operate with confidence and calling into question the nation's future in AI development. The letter has been sent to members of both the House and Senate Armed Services Committees, including prominent figures such as Republican Senator Roger Wicker and Democratic Senator Jack Reed.
As the situation unfolds, the future of Anthropic remains uncertain. While Secretary Hegseth has yet to formally notify the company of its supply chain risk status—aside from a tweet—the latest reports indicate that Anthropic is actively seeking to negotiate a resolution with the Pentagon.
The conflict underscores a larger debate about the role of AI in society and the ethical implications of its deployment in military operations. As the technology advances, the balance among innovation, security, and ethical governance will shape the future landscape of artificial intelligence in the United States.
Source: Gizmodo News