We have all done it. We’ve all seen a video on the internet, maybe a cute video of a cat or an intense natural disaster, and thought, even if only for a second, that what we were viewing was real. While a video of a bear jumping on a trampoline is largely harmless, it is becoming increasingly difficult to decipher whether what we see is real.
As AI technologies improve, so do deepfakes. From Instagram videos of Mark Zuckerberg six years ago to viral Tom Cruise deepfakes, the technology keeps growing more realistic, and deepfake schemes have evolved from targeting your sweet grandma to conning businesspeople out of millions.
Deepfake Fraud: A Business Model
In early 2024, a finance employee at a multinational firm fell victim to a deepfake scheme that resulted in a $25 million loss for the company. According to Hong Kong police, the schemer had invited the employee to a video conference with other staff members. In the conference, the company’s chief financial officer appeared to ask the employee to wire him 200 million Hong Kong dollars (roughly $25.6 million). As it turned out, everyone in that video conference was fake.
According to the AI Incident Database, deepfake fraud is “now a default business model.” From November 2025 to the end of January 2026, the database added over 108 deepfake incidents, a handful of which occurred during that same period.
Many of the incidents involved investment scams. In Sweden, approximately 5,000 small-time investors lost 500 million SEK (around $52.5 million). Deepfake investment videos featuring prominent Swedish financial figures, such as investor Günter Mårder and journalist Gabriel Mellqvist, circulated across Meta’s platforms.
Similar schemes have been circulating across Meta platforms, like Facebook and Instagram, for a while now. In 2024, the company estimated that 10% of its annual revenue, equivalent to around $16 billion, came from scam advertisements and banned goods. According to internal documents seen by Reuters, the platform runs approximately 15 billion scam ads a day.
Fred Heiding, a Harvard researcher specializing in AI-based scams, said:
“The scale is changing. It’s becoming so cheap, almost anyone can use it now. The models are getting really good – they’re becoming much faster than most experts think.”
AI tools used to create deepfakes are widely available online, and deepfake scams are increasing by the day.
Deepfakes and Grok
Looking at a catalog of deepfake incidents, one name takes center stage: Elon Musk’s xAI model, Grok. The first mention of Grok in the AI Incident Database occurred on December 15, 2025.
Following the Bondi Beach shooting in Australia, Grok was used to generate a false report claiming that the hero bystander who disarmed the shooter was named Edward Crabtree. The real civilian hero was Ahmed al Ahmed.

Another case of false identification occurred on January 7, 2026, when Grok “unmasked” the U.S. Immigration and Customs Enforcement (ICE) officer who shot Renee Nicole Good. The generated face was released under the name “Steve Grove,” which is also the name of the Minnesota Star Tribune’s CEO. The publication swiftly clarified that Grove was not the ICE shooter in question, and it was later revealed that the shooter’s name was Jonathan Ross.
Grok’s biggest controversy began at the end of 2025, when the AI model started distributing inappropriate images of women, many of whom were minors. A report from the Center for Countering Digital Hate estimated that, over 11 days, Grok generated around three million sexualized images, 23,000 of which depicted children.
The nonconsensual images flooded the internet and promptly sparked controversy. The UK and the EU launched formal investigations into xAI. The governments of Malaysia and Indonesia blocked the AI model completely.
Since December 2023, Musk’s platform X has been under scrutiny in the EU for its digital content rules. Grok’s deepfakes brought the platform back into the spotlight. European Commission President Ursula von der Leyen said that Europe will not “tolerate unthinkable behaviour, such as digital undressing of women and children.”
In the United States, California’s Attorney General issued a cease-and-desist order against xAI. The California Department of Justice claims that xAI bypassed regulations in order to fuel engagement on X, giving Grok its “Spicy Mode” undressing capabilities.
Despite pushback and new curbs on the model’s generation capabilities, Grok may still be producing sexualized images. Between January 27 and 28, nine Reuters reporters ran prompts through Grok and received sexualized images in 29 of 43 attempts. An earlier round of prompts, run between January 14 and 16, produced sexualized images in 45 of 55 attempts. When they ran the prompts through OpenAI’s ChatGPT, Alphabet’s Gemini, and Meta’s Llama, the models refused.
The Future of Deepfake Fraud

In an ever-evolving online space of false information and realistic deepfakes, it can be hard to trust anything you see. This distrust of the media could have severe consequences spanning politics, science, and culture.
“That’ll be the big pain point here, the complete lack of trust in digital institutions, and institutions and material in general,” Fred Heiding added.
The new AI-centered world is forcing us to fact-check, to take a step back when we see something and question its authenticity.
A colleague of mine, who has teenage boys, recently mentioned that her sons are quick to call her out for showing them AI-generated content, which raises the question: Will we evolve to recognize AI when we see it, or will it continue evolving to fool us?
Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Cover Photo: Speaker at the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) 2023. Cover Photo Credit: Wikimedia Commons