A swarm of bots armed with your credit card information sounds like a glaring-red signal to cancel the card. But a swarm of bots with your credit card information—and permission to buy those jeans you’ve been eyeing? Doesn’t sound so bad.
Yet “shopping” with AI agents from the likes of OpenAI or Perplexity could wreak havoc on companies that already struggle to distinguish between so-called good and bad bots, warns Experian in its 2026 Fraud Forecast, published today. The No. 1 threat to companies, according to the forecast, is “machine-to-machine mayhem,” in which cybercriminals blend good bots doing your shopping with bad bots tasked with fraud.
“It’s not enough anymore to say that it’s a bot, so we need to stop this traffic,” said Kathleen Peters, chief innovation officer for fraud and identity at Experian North America. “Now, we need to say, ‘Is it a good bot or is it a malicious bot?’”
The U.S. Federal Trade Commission found that consumers lost more than $12.5 billion to fraud last year, while nearly 60% of companies reported an increase in losses from 2024 to 2025. Strikingly, financial losses ballooned by 25% even as the number of fraud reports held steady at 2.3 million a year, a sign that each individual scheme is cheating consumers and companies out of more money.
In a separate survey released in July, Experian reported that 72% of business leaders believe that AI-enabled fraud and deepfakes will be among their top operational challenges this year.
The company predicts this year will be a “tipping point” for AI-enabled fraud, one that will force conversations about liability and regulation around agentic AI in e-commerce, Peters said. “We want to let the good agents through to provide convenience and efficiency, but we need to make sure that doesn’t accidentally become a shortcut for bad actors,” she said.
Some e-commerce companies already block AI agents. Amazon, for example, generally bars third-party bots from browsing and shopping on its platform, and late last year sued to stop Perplexity’s AI agents from shopping autonomously. The e-commerce giant has publicly stated that the move protects customer security and privacy.
Yet Peters warns that retailers will soon need to grapple with how to manage AI bots once consumers give agents permission to shop for them. She notes that retailers will need to confirm that a consumer gave the agent permission, that the agent is faithful to the consumer’s intent, that the agent has permission to buy and not just browse—and that there’s an actual consumer behind the bot, and not another cybercriminal.
Disruption is also on the table. Retailers want direct engagement with customers to recommend products, build loyalty, and gather data. Some—or all—of that could be crippled if an autonomous agent just completes a transaction and then vanishes.
Deepfake employees infiltrate companies
The second greatest threat for the year, according to Experian, is deepfake candidates infiltrating remote workforces. This threat has already materialized: The FBI and Department of Justice issued multiple warnings last year about documented cases of North Korean operatives posing as IT workers to land jobs and send their salaries back to the regime. These fake IT workers use deepfake technology and identity manipulation to gain employment at hundreds of U.S. companies.
Experian predicts employment fraud will escalate as improved AI tools allow deepfake candidates to get through interviews more easily. Companies will unwittingly onboard these fake employees and grant them access to internal systems.
Beyond state-backed fraud, Peters said the tight labor market could also spur desperate job seekers to buy help, and skilled workers to monetize their expertise by getting a candidate through an interview. Fully remote data science jobs with robust salaries usually require technical proficiencies that are gauged during interviews. As deepfake tools improve, it will likely get harder for companies to tell who is actually doing the answering.
“It’s a very competitive job market out there and individuals may offer their services to get through a technical interview,” she said.
Threats on the horizon
The forecast warns of three other trends expected to ramp up in 2026.
- Smart home devices, including virtual assistants, smart locks, and security systems, will introduce new weaknesses that cybercriminals could exploit.
- Website cloning could overwhelm fraud teams as AI tools make it simpler to replicate legitimate websites for attacks.
- Intelligent bots with high emotional IQ will carry out automated romance scams and family-member-in-need scams with unprecedented sophistication.
Just as companies are looking to increase their efficiency through AI, cybercriminals are getting more efficient. AI has “democratized” access to powerful tools, putting them in the hands of not just engineers but fraudsters as well, Peters said. “With less expertise, they’re able to create more convincing scams and more convincing text messages that they can blast out at scale.”