Trump administration officials are encouraging banks to test Mythos, Anthropic’s new AI model, even as the Pentagon labels the same company a security risk. The mixed signals show how divided the government remains over AI safety.
This creates a strange situation: one arm of the government says Anthropic is dangerous while another urges banks to adopt its technology. Banks handle some of America’s most sensitive financial data, which makes the contradiction even harder to explain.
Security Risk or Banking Helper?
Anthropic makes Claude, one of the most popular AI chatbots and a direct competitor to ChatGPT. Its new Mythos model is designed specifically for financial services, promising to help banks with customer service and fraud detection.
But the Department of Defense recently added Anthropic to a list of companies that could threaten national security through their supply chains. A listing like this usually signals that the military believes a company’s technology could benefit foreign adversaries or open security holes.
The timing makes this especially awkward. While defense officials worry about Anthropic’s connections and data practices, Trump’s team is actively promoting its banking AI to financial institutions across the country.
What Happens Next
Banks now face a tough choice. They want cutting-edge AI to stay competitive, but they also can’t afford to anger regulators or create security problems. Many will likely wait to see which government position prevails before making any big decisions about adopting Anthropic’s technology.