Believing AI content increases risk of online scams: Study

Users who trust AI-generated content are far more likely to fall victim to online scams, facing financial loss, stress and disrupted daily life

Users who believe AI-generated fake content are up to five times more likely to fall victim to online fraud, according to a recent Visa study conducted across European countries.

The financial services company warned that fake advertisements and AI-generated scam content are spreading rapidly on social media, leaving users increasingly vulnerable.

The research found that victims of digital fraud face an average financial loss of $165, with incidents costing the European economy an estimated $9.5 billion annually. Beyond monetary losses, victims also experience emotional stress, heightened anxiety and reduced productivity. Resolving issues caused by online fraud takes an average of 14 working days, roughly 70% of a typical monthly work schedule.

User behavior increases risk

The study highlights that online behavior plays a critical role in fraud exposure. Users who share content without verifying its accuracy are twice as likely to be targeted compared with those who fact-check first. Common habits – such as skimming headlines, sharing posts without verification, or trusting AI-generated content – create new opportunities for scammers.

About 44% of respondents reported realizing only later that content they believed to be real was actually AI-generated. One-third admitted they often read only headlines, and one in five said they shared content without confirming its truthfulness.

Impact on online shopping habits

Online fraud is also changing consumer behavior. Approximately 9 million Europeans are estimated to have altered their online shopping habits after falling victim to scams. Among them, 28% reduced online shopping, while 4% stopped entirely.

Artificial intelligence sits at the core of Visa’s strategy to prevent fraud. The company has used AI-powered tools to secure payments for 30 years and has invested $13 billion over the past five years in smart technologies that detect suspicious activity in real time and block fraudulent attempts before they reach users.

Despite technological advancements, one-third of users believe AI-generated content makes detecting fraud on social media more difficult, underscoring that awareness is as critical as technology in combating scams.

Social responsibility in Türkiye

Visa Türkiye, in collaboration with the United Nations Development Program (UNDP) and Habitat, is running the “Safe in the Digital World” project to treat fraud not only as a technical risk but as a societal issue. The initiative emphasizes real-life cases over theoretical lessons, covering social engineering, phishing, impersonation of public officials, and AI-generated fake audio and video.

Samile Mümin, Visa Türkiye’s general manager, said AI transforms business processes and simplifies daily life, but scammers are increasingly exploiting it to deceive users and undermine trust in online channels. “Distinguishing the fake from the real is harder than ever, and the real-world consequences are lost money, time and trust,” Mümin said.

Mümin emphasized that Visa collaborates with industry partners to equip consumers with the knowledge and tools to stay safe. “Over the past five years, we have invested $13 billion in AI-powered platforms to prevent fraud. Our global security tools now block more than $40 billion in attempted fraud annually. For example, during Black Friday 2025, we detected and prevented 144% more fraud attempts worldwide compared with the previous year,” she added.

Security expertise for social good

Through the “Safe in the Digital World” program, Visa is converting decades of security expertise into societal impact. Training sessions illustrate real-world scams, including AI-based attacks, with a focus on individuals over 55, who are particularly vulnerable to emotional manipulation. The program promotes the “Stop-Think-Consult” (3D) rule to encourage safer digital practices, Mümin said.

"Fraud is not only a financial threat but a serious societal problem. By addressing real cases and promoting awareness, we aim to build a safer digital future,” she added.