AI recruiting startup Mercor got hit by hackers who stole company data and are now trying to extort money from the company. The attack happened through a security flaw in an open-source AI tool called LiteLLM that Mercor was using.
The breach underscores that even AI companies aren't safe from cyberattacks. Mercor uses artificial intelligence to help businesses find job candidates, yet it couldn't protect its own data from a familiar kind of software attack.
When AI Tools Become Attack Weapons
The hackers didn't break into Mercor directly. Instead, they exploited a weakness in LiteLLM, a popular open-source tool that gives companies a single interface to different AI systems such as ChatGPT and Claude. Because Mercor used this tool, the attackers were able to reach its internal systems through that weak link.
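The proxy role LiteLLM plays can be sketched with a toy version of the pattern: a single entry point that dispatches each request to whichever AI provider serves the requested model. The function and provider names below are illustrative placeholders, not LiteLLM's real API.

```python
# Toy sketch of the "unified gateway" pattern that tools like LiteLLM
# implement. All names here are illustrative, not LiteLLM's actual API.

def call_openai(prompt: str) -> str:
    # Placeholder standing in for a real OpenAI API call.
    return f"[openai] {prompt}"

def call_anthropic(prompt: str) -> str:
    # Placeholder standing in for a real Anthropic API call.
    return f"[anthropic] {prompt}"

# One registry maps model names to provider backends.
PROVIDERS = {
    "gpt-4o": call_openai,
    "claude-3": call_anthropic,
}

def completion(model: str, prompt: str) -> str:
    """Route a prompt to whichever provider serves the named model."""
    try:
        handler = PROVIDERS[model]
    except KeyError:
        raise ValueError(f"unknown model: {model}")
    return handler(prompt)

print(completion("gpt-4o", "hello"))    # [openai] hello
print(completion("claude-3", "hello"))  # [anthropic] hello
```

The convenience is also the risk: because every request and every provider credential flows through this one chokepoint, a flaw in the gateway exposes everything behind it.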
This type of attack, often called a supply-chain attack, is becoming more common. Hackers target widely used software tools because compromising one tool can give them access to hundreds of companies at once. It's like stealing a master key instead of picking every lock individually.
The extortion crew that took credit for the attack specializes in stealing data first, then demanding payment to keep it private. They’ve hit other companies this way before.
What Happens Next
Mercor is working with security experts to determine exactly what data was stolen and which customers may be affected. The company behind LiteLLM has since patched the flaw, but for Mercor the damage was done.
This incident highlights a growing problem: as companies rush to add AI features, they often adopt third-party tools without fully understanding the security risks those tools introduce.