
Pentagon Says Anthropic Could Sabotage AI During War

The Pentagon is raising concerns that AI company Anthropic could potentially manipulate or sabotage its AI tools if the military uses them during wartime. Defense officials worry the company behind Claude AI might have hidden ways to interfere with their systems.

This touches on a bigger fear in military circles: what happens when you rely on AI tools made by private companies that might not always be on your side? The concern isn't just theoretical; it's about whether these companies have built hidden switches into their AI that could be flipped during a conflict.

Company Fights Back Hard

Anthropic’s executives are pushing back against these allegations, calling them completely unfounded. They argue it’s technically impossible for them to remotely manipulate their AI models once they’re deployed by the military. The company says the Pentagon’s concerns show a fundamental misunderstanding of how their AI actually works.

The dispute highlights a growing tension between Silicon Valley AI companies and government agencies. While the military wants to use cutting-edge AI tools, it is increasingly wary of depending on private companies that operate under their own rules.

This comes as the Defense Department is spending billions on AI contracts with various tech companies, making trust a critical issue. Other AI companies like OpenAI and Google are likely watching this dispute closely, knowing they could face similar accusations.

What Happens Next

Expect more scrutiny of AI companies working with the government. The Pentagon will probably demand more transparency about how these AI systems work and whether companies can access them remotely. This could slow down military AI adoption or force companies to make their systems more transparent.

Originally reported by Wired