

(Maxim Apryatin/Shutterstock)
Companies are flocking to GenAI technologies to help automate business functions, such as reading and writing emails, generating Java and SQL code, and executing marketing campaigns. At the same time, cybercriminals are finding tools like WormGPT and FraudGPT useful for automating nefarious deeds, such as writing malware, distributing ransomware, and automating the exploitation of computer vulnerabilities around the Internet. With the pending release of API access to a language model dubbed DarkBERT into the criminal underground, the GenAI capabilities available to cybercriminals could increase significantly.
On July 13, researchers with SlashNext reported the emergence of WormGPT, an AI-powered tool that’s being actively utilized by cybercriminals. About two weeks later, the firm let the world know about another digital creation from the criminal underground, dubbed FraudGPT. FraudGPT is being promoted by its creator, who goes by the name “CanadianKingpin12,” as an “exclusive bot” designed for fraudsters, hackers, and spammers, SlashNext says in a blog post this week.
FraudGPT boasts a number of advanced GenAI capabilities, according to an ad that SlashNext discovered on a cybercrime forum, including the ability to:
- Write malicious code;
- Create undetectable malware;
- Create phishing pages;
- Create hacking tools;
- Write scam pages / letters;
- Find leaks and vulnerabilities;
- Find “cardable” sites;
- “And much more | sky is the limit.”
(Image from a video produced by cybercriminals and shared by SlashNext)
When SlashNext contacted the malware’s author, the author insisted that FraudGPT was superior to WormGPT, the very comparison SlashNext had hoped to draw out in the conversation. The malware author then went on to say that he or she had two more malicious GenAI products in development, DarkBART and DarkBERT, and that both would be integrated with Google Lens, giving the tools the capability to send text accompanied by images.
This perked up the ears of the security researchers at SlashNext, a Pleasanton, California, company that provides protection against phishing and human hacking. DarkBERT is a large language model (LLM) created by a South Korean security research firm and trained on a large corpus of data culled from the Dark Web in order to fight cybercrime. It has not been publicly released, but CanadianKingpin12 claimed to have access to it (although it was not clear whether they actually did).
DarkBERT could potentially provide cybercriminals with a leg up in their malicious schemes. In his blog post, SlashNext’s Daniel Kelley, who identifies as “a reformed black hat computer hacker,” shares some of the potential ways that CanadianKingpin12 envisions the tool being used. They include:
- “Assisting in executing advanced social engineering attacks to manipulate individuals;”
- “Exploiting vulnerabilities in computer systems, including critical infrastructure;”
- “Enabling the creation and distribution of malware, including ransomware;”
- “The development of sophisticated phishing campaigns for stealing personal information;” and
- “Providing information on zero-day vulnerabilities to end-users.”
“While it’s difficult to accurately gauge the true impact of these capabilities, it’s reasonable to expect that they will lower the barriers for aspiring cybercriminals,” Kelley writes. “Moreover, the rapid progression from WormGPT to FraudGPT and now ‘DarkBERT’ in under a month, underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape.”
What’s more, just as OpenAI has enabled thousands of companies to leverage powerful GenAI capabilities through APIs, so too will the cybercriminal underground.
“This advancement will greatly simplify the process of integrating these tools into cybercriminals’ workflows and code,” Kelley writes. “Such progress raises significant concerns about potential consequences, as the use cases for this type of technology will likely become increasingly intricate.”
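To see why API access matters so much, consider how little code it takes to wire a hosted LLM into an existing workflow. The sketch below uses the legitimate OpenAI Python SDK purely for illustration; the model name, prompt, and helper function are assumptions for a benign business use case, not anything tied to the tools described above, but the same few lines are all a criminal service would need to expose to its “customers.”

```python
# Minimal sketch: calling a hosted LLM from application code via an API.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_email(summary: str) -> str:
    """Ask the model to turn a short summary into a polished email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write concise business emails."},
            {"role": "user", "content": f"Draft an email about: {summary}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_email("Q3 marketing campaign kickoff next Tuesday"))
```

Swap the endpoint and the prompt, and the same handful of lines plugs any API-accessible model, benign or malicious, directly into an automated pipeline, which is exactly the kind of integration Kelley warns about.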
The GenAI criminal activity recently caught the eye of Cybersixgill, an Israeli security firm. According to Delilah Schwartz, who works in threat intelligence at Cybersixgill, all three products are being advertised for sale.
“Cybersixgill observed threat actors advertising FraudGPT and DarkBARD on cybercrime forums and Telegram, in addition to chatter about the tools,” Schwartz says. “Malicious versions of deep language learning models are currently a hot commodity on the underground, producing malicious code, creating phishing content, and facilitating other illegal activities. While threat actors abuse legitimate artificial intelligence (AI) platforms with workarounds that evade safety restrictions, malicious AI tools go a step further and are specifically designed to facilitate criminal activities.”
The company has noted ads promoting FraudGPT, FraudBot, and DarkBARD as “Swiss Army Knife hacking tools.”
“One ad explicitly stated the tools are designed for ‘fraudsters, hackers, spammers, [and] like-minded individuals,'” Schwartz says. “If the tools perform as advertised, they would certainly enhance a variety of attack chains. With that being said, there appears to be a dearth of actual reviews from users championing the products’ capabilities, despite the abundance of advertisements.”
Related Items:
Feds Boost Cyber Spending as Security Threats to Data Proliferate
Security Concerns Causing Pullback in Open Source Data Science, Anaconda Warns
Filling Cybersecurity Blind Spots with Unsupervised Learning