Oct 29, 2025

ScamGPT: Malicious AI and the Future of Scams

The generative AI tools that revolutionized productivity are now powering digital fraud, from deepfakes and cloned voices to synthetic identities. To combat AI-driven fraud, businesses need threat intelligence that combines behavioral biometrics, device intelligence, and graph-based identity insights.

Author

Team Bureau

AI-powered scams scale faster, persuade more convincingly, and are harder to trace. Fraudsters are using AI and generative tools more than ever before, and malicious AI is powering complex attacks at scale with a polished, automated approach.

The rise of generative AI fraud: When technology turns rogue

The democratization of AI is reshaping the fraud landscape. Open-source LLMs are now part of standard fraud toolkits, and even individuals with little to no technical skill can use them to create believable content, realistic cloned voices, or synthetic personas. Fraud has become an open playground.

Fraudsters use AI and generative tools to launch complex attacks because AI-powered scams scale faster, are more persuasive, and are harder to trace. They can create phishing emails that look real, clone voices that sound real, and automate social engineering tasks. These factors make it challenging for banks and fintechs to accurately identify real users: according to Datos, 93% of financial institutions find it hard to fight generative AI-powered attacks.

Underground AI models: The dark side of large language models (LLMs)

The legitimate AI ecosystem is only the tip of the iceberg. Beneath it lies a parallel world, replete with large language models (LLMs) that specialize in crime. These purpose-built tools help evade filters, create phishing scripts, and generate malicious code. They are sold as AI-as-a-Service through data harvest groups on Telegram and encrypted marketplaces. They come with support, updates, and prompt libraries for different fraud use cases.

Some examples include:

  • FraudGPT: Also known as a ‘crime co-pilot,’ it helps write persuasive emails, social media messages, and scam scripts for higher success rates.

  • WormGPT: Built on an open-source LLM, it automates phishing attacks and generates exploit code.

  • DarkBERT / DarkBART: These models are trained on datasets from the dark web. They understand underground slang, enabling contextual communication among cybercriminals.

Deepfake ecosystem: Tools turning illusion into reality

The generative AI ecosystem has evolved into a fully equipped fraud factory. It includes tools and technology categories that support different stages of deception. These include:

  • Voice cloning: Recreate anyone’s voice with a few seconds of audio. Powers CEO fraud and vishing attacks to impersonate trusted authorities.

  • Video generation: Produce realistic talking-head videos. Used to spoof ID verification calls or fake onboarding interviews.

  • Image generation: Forge KYC documents and generate convincing profile photos for synthetic identities.

Anatomy of an AI-powered scam

Every AI-powered scam follows a playbook. This ‘fraud chain’ pattern mirrors the process followed in automated marketing pipelines. The typical steps include:

  1. Data harvesting: Collecting users’ personal information from leaked databases or open social media profiles. Specialized data harvest groups mine, cluster, and enrich this data for targeting.

  2. Identity fabrication: Using generative models to synthesize faces, voices, and documents. These are then used to build fake personas and pass KYC or new account creation checks.

  3. Engagement: Using AI chatbots for human-like conversations through WhatsApp, Telegram, or customer support channels. Programmed voice clones may pose as business advisors (bank, telecom operator) and call victims.

  4. Persuasion: LLMs personalize dialogue in real-time, mirroring tone and sentiment. The AI adapts dynamically to exploit emotional or behavioral cues and build trust.

  5. Execution and evasion: Hide the digital trails of stolen funds or credentials using fake IPs, device spoofing, and other AI tools.

Businesses can use behavioral biometrics to identify synthetic or FraudGPT-assisted sessions. Behavioral biometrics authenticates users based on patterns and signals that AI cannot convincingly mimic, such as typing cadence, mouse movement, and session rhythm. Bureau’s Graph Identity Network (GIN) then connects devices, identities, and behaviors, helping businesses intercept anomalies early in the chain and take timely countermeasures.
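
As a minimal illustration of how such behavioral signals might be scored, the Python sketch below compares a live session’s typing cadence, mouse speed, and idle rhythm against a user’s historical baseline using simple z-scores. The feature names, sample values, and threshold are invented for this example; a production system would use far richer models.

    import statistics

    def anomaly_score(session: dict, history: list[dict]) -> float:
        """Average absolute z-score of a live session against the user's baseline.

        Scripted or LLM-assisted sessions tend to show unnaturally uniform
        cadence, so large deviations from the baseline raise the score.
        """
        total, n = 0.0, 0
        for feature, value in session.items():
            baseline = [h[feature] for h in history if feature in h]
            if len(baseline) < 2:
                continue  # not enough history to judge this feature
            mean = statistics.mean(baseline)
            stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
            total += abs(value - mean) / stdev
            n += 1
        return total / n if n else 0.0

    # Hypothetical features: mean inter-key interval (ms), mouse speed (px/s),
    # and the share of the session spent idle.
    history = [
        {"keystroke_ms": 180, "mouse_speed": 400, "pause_ratio": 0.30},
        {"keystroke_ms": 175, "mouse_speed": 420, "pause_ratio": 0.28},
        {"keystroke_ms": 190, "mouse_speed": 390, "pause_ratio": 0.33},
    ]
    live = {"keystroke_ms": 60, "mouse_speed": 1200, "pause_ratio": 0.01}  # machine-like
    if anomaly_score(live, history) > 3.0:  # illustrative threshold
        print("flag session for step-up verification")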

Case insight 1: The “bank advisor” voice clone scam

In the “bank advisor” scam, fraudsters combine stolen customer details with deepfake audio. They train cloned voices on snippets from legitimate marketing calls found online, then call existing customers posing as bank advisors. For greater authenticity, they add familiar accents and background call-center noise. The fake advisors then “verify” account details and ask customers to confirm transactions as part of routine security “checks.” By the time the bank spots the fraud, the bad actors have already cheated several customers and stolen their funds.

Behavioral analytics flags deviations in session patterns, such as typing speed, app navigation, and device familiarity, even when the voice sounds authentic. Combined with device intelligence, it can detect spoofing and emulated environments, flagging fraudulent activity before funds are stolen.
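
On the device-intelligence side, a rough sketch of the kind of rule check involved: flagging common emulator and spoofing tells such as known emulator device models, missing motion sensors, and a device timezone that disagrees with the IP’s geolocation. All signal names here are hypothetical; real device-intelligence SDKs expose richer, harder-to-fake telemetry.

    # Hypothetical device telemetry check for spoofed or emulated environments.
    EMULATOR_MODELS = {"google_sdk", "Android SDK built for x86"}

    def device_risk_flags(device: dict) -> list[str]:
        flags = []
        if device.get("model") in EMULATOR_MODELS:
            flags.append("known emulator model")
        if not device.get("has_accelerometer", True):
            flags.append("missing motion sensors")  # common in emulators
        if device.get("timezone") != device.get("ip_timezone"):
            flags.append("timezone/IP geolocation mismatch")  # VPN or spoofing
        if device.get("is_rooted"):
            flags.append("rooted or jailbroken device")
        return flags

    print(device_risk_flags({
        "model": "google_sdk",
        "has_accelerometer": False,
        "timezone": "Asia/Kolkata",
        "ip_timezone": "Europe/Amsterdam",
    }))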


Case insight 2: Fake tech support and AI-driven chat scams

Another fast-rising fraud type involves fake tech support and AI-driven chat scams. Fraudsters lure unsuspecting users to fake support pages, where AI chat models pose as “agents” and simulate legitimate service workflows. Fraudsters may also integrate deepfake voice calls cloned from brand training videos, engaging customers to confirm steps and displaying fake security prompts. After building rapport, the “agents” request remote access or direct users to install malware under the guise of troubleshooting.

Device fingerprinting can detect login sessions that originate from a remote desktop or emulated device and flag high-risk patterns that are invisible to standard security systems. Graph-based intelligence can then link fraudulent device clusters to expose data harvest group activity in real time.
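
To make the graph idea concrete, here is a small sketch using the open-source networkx library: accounts and device fingerprints become nodes, observed sessions become edges, and connected components that pack many accounts onto few devices are surfaced as possible rings. The data shapes and thresholds are assumptions for illustration.

    import networkx as nx  # open-source graph library: pip install networkx

    # (account_id, device_fingerprint) pairs as a defense system might log them.
    sessions = [
        ("acct_1", "fp_A"), ("acct_2", "fp_A"), ("acct_3", "fp_A"),
        ("acct_3", "fp_B"), ("acct_4", "fp_B"),  # chained via shared devices
        ("acct_5", "fp_C"),                      # isolated, likely benign
    ]

    G = nx.Graph()
    for account, fingerprint in sessions:
        G.add_edge(("acct", account), ("device", fingerprint))

    # Components with many accounts on very few devices suggest a fraud ring.
    for component in nx.connected_components(G):
        accounts = sorted(n for kind, n in component if kind == "acct")
        devices = sorted(n for kind, n in component if kind == "device")
        if len(accounts) >= 3 and len(devices) <= 2:  # illustrative threshold
            print("possible fraud ring:", accounts, "via", devices)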

Case insight 3: Emotion-driven fraud - The family emergency deepfake

The most distressing evolution of AI fraud is its emotional manipulation. In one case, a retiree in Europe received a frantic video call from their “grandson” requesting funds for emergency surgery. The fraudster had scraped the grandson’s details from social media and used them to generate a deepfake of his face, voice, and emotional expressions. The retiree could not distinguish the fraudster from the real grandson, and the family realized the deception only after large sums had already been transferred.

Fraudsters now use human trust as the new attack surface. Therefore, behavioral and contextual intelligence are critical in fraud prevention. Fraud defense solutions need to analyze the “how” in addition to the “who.” This includes looking at the location, device behavior, and interaction consistency.

The way forward: Fighting AI with intelligence

Generative AI has shifted the balance of power towards fraudsters. Therefore, the next phase of fraud prevention is not only about blocking AI; it’s about outsmarting it.

Traditional defenses rely on static rules and identity proofs, so they are bound to fail against synthetic identities and cloned voices. Businesses need multi-signal trust intelligence that examines behavior, device history, and network connections: the context, not just the credentials.
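
As a toy example of multi-signal scoring, the sketch below combines behavioral, device, and network risk into a single weighted decision, so a session presenting clean credentials but a risky context still gets challenged. The signals, weights, and cut-offs are invented for illustration and are not Bureau’s actual scoring.

    from dataclasses import dataclass

    @dataclass
    class Signals:
        behavior_anomaly: float  # 0..1, from behavioral biometrics
        device_risk: float       # 0..1, from device intelligence
        network_risk: float      # 0..1, from graph/identity linkages

    # Illustrative weights; a production system would learn these from labeled fraud.
    WEIGHTS = {"behavior": 0.4, "device": 0.3, "network": 0.3}

    def trust_decision(s: Signals) -> str:
        risk = (WEIGHTS["behavior"] * s.behavior_anomaly
                + WEIGHTS["device"] * s.device_risk
                + WEIGHTS["network"] * s.network_risk)
        if risk < 0.3:
            return "allow"
        if risk < 0.7:
            return "step-up"  # e.g., re-verify with a stronger factor
        return "deny"

    # A clean credential with a risky context is still challenged.
    print(trust_decision(Signals(behavior_anomaly=0.9, device_risk=0.6, network_risk=0.2)))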

At Bureau, this is called fighting AI with intelligence. Bureau unifies signals from behavioral biometrics, device intelligence, and its Graph Identity Network (GIN) to give businesses a 360° view of identity trust, detecting the invisible patterns that expose AI-assisted fraud.

Fraud will continue to evolve, and so will intelligence. The right systems, vigilance, and data collaboration can help digital businesses stay a step ahead of AI-driven fraud.

To gain deeper insights into how AI is reshaping fraud across industries and how Bureau powers proactive defenses, talk to an expert now.
