Online criminals are increasingly using artificial intelligence to carry out cyberattacks, the U.S.-based AI developer Anthropic, maker of the Claude chatbot, warned in its threat intelligence report on Wednesday.
Claude has been misused to infiltrate networks, steal and analyze data, and craft "psychologically targeted extortion demands," Anthropic said.
In some cases, attackers threatened to release stolen information unless paid more than $500,000, the company said.
In the past month alone, attackers targeted 17 organizations across the health care, government and religious sectors, according to the report. Claude helped them identify vulnerabilities, choose which networks to target and decide what data to extract.
Jacob Klein, Anthropic's head of threat intelligence, told tech outlet The Verge that such operations previously required a team of experts, but AI now allows a single individual to conduct sophisticated attacks.
Anthropic also documented cases of North Korean operatives using Claude while posing as remote programmers for U.S. companies to "fund North Korea's weapons programs." The AI helped them communicate with employers and perform tasks they lacked the skills to do themselves.
Historically, North Korean workers underwent years of training to qualify for such jobs, but "Claude and other models have effectively removed this constraint," Anthropic said.
Criminals have also developed AI-assisted fraud schemes for sale online, including a Telegram bot used in romance scams that emotionally manipulates victims in multiple languages to extract money.
While Anthropic has implemented safeguards to prevent abuse, attackers continue to try to find ways around them. The company says it is using lessons from these incidents to strengthen its protections against AI-enabled cybercrime.