

(Maxim Apryatin/Shutterstock)
Companies are flocking to GenAI technologies to help automate business functions, such as reading and writing emails, generating Java and SQL code, and executing marketing campaigns. At the same time, cybercriminals are finding tools like WormGPT and FraudGPT useful for automating nefarious deeds, such as writing malware, distributing ransomware, and automating the exploitation of computer vulnerabilities around the Internet. With the pending release of API access to a language model dubbed DarkBERT into the criminal underground, the GenAI capabilities available to cybercriminals could increase significantly.
On July 13, researchers with SlashNext reported the emergence of WormGPT, an AI-powered tool that’s being actively utilized by cybercriminals. About two weeks later, the firm disclosed another digital creation from the criminal underground, dubbed FraudGPT. FraudGPT is being promoted by its creator, who goes by the name “CanadianKingpin12,” as an “exclusive bot” designed for fraudsters, hackers, and spammers, SlashNext says in a blog post this week.
FraudGPT boasts a range of advanced GenAI capabilities, according to an ad posted on a cybercrime forum and discovered by SlashNext. Per the ad, the tool can:
- Write malicious code;
- Create undetectable malware;
- Create phishing pages;
- Create hacking tools;
- Write scam pages / letters;
- Find leaks and vulnerabilities;
- Find “cardable” sites;
- “And much more | sky is the limit.”
(Image from a video produced by cybercriminals and shared by SlashNext)
When SlashNext contacted the malware’s author, the author insisted that FraudGPT was superior to WormGPT, a comparison that was SlashNext’s main goal in initiating the conversation. The malware author then went on to say that he or she had two more malicious GenAI products in development, DarkBART and DarkBERT, and that both would be integrated with Google Lens, giving the tools the capability to send text accompanied by images.
This perked up the ears of the security researchers at SlashNext, a Pleasanton, California company that provides protection against phishing and human hacking. DarkBERT is a large language model (LLM) created by a South Korean security research firm and trained on a large corpus of data culled from the Dark Web to fight cybercrime. It has not been publicly released, but CanadianKingpin12 claimed to have access to it (although it was not clear whether they actually did).
DarkBERT could potentially provide cybercriminals with a leg up in their malicious schemes. In his blog post, SlashNext’s Daniel Kelley, who identifies as “a reformed black hat computer hacker,” shares some of the potential ways that CanadianKingpin12 envisions the tool being used. They include:
- “Assisting in executing advanced social engineering attacks to manipulate individuals;”
- “Exploiting vulnerabilities in computer systems, including critical infrastructure;”
- “Enabling the creation and distribution of malware, including ransomware;”
- “The development of sophisticated phishing campaigns for stealing personal information;” and
- “Providing information on zero-day vulnerabilities to end-users.”
“While it’s difficult to accurately gauge the true impact of these capabilities, it’s reasonable to expect that they will lower the barriers for aspiring cybercriminals,” Kelley writes. “Moreover, the rapid progression from WormGPT to FraudGPT and now ‘DarkBERT’ in under a month, underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape.”
What’s more, just as OpenAI has enabled thousands of companies to leverage powerful GenAI capabilities through the power of APIs, so too will the cybercriminal underground leverage API access to these malicious tools.
“This advancement will greatly simplify the process of integrating these tools into cybercriminals’ workflows and code,” Kelley writes. “Such progress raises significant concerns about potential consequences, as the use cases for this type of technology will likely become increasingly intricate.”
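For context, the API integration Kelley anticipates follows the same request pattern legitimate developers already use with commercial GenAI services: assemble a small JSON payload and POST it to a chat endpoint. Below is a minimal, benign sketch of that pattern; the endpoint URL and model name are illustrative placeholders, not any real service.

```python
import json

# Placeholder endpoint for a chat-completions-style GenAI API
# (the request pattern OpenAI popularized; URL is illustrative only).
API_URL = "https://api.example.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Assemble the JSON payload a client would POST to the API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize this quarter's sales emails.")
print(json.dumps(payload, indent=2))
```

In practice a client sends this payload with an authorization header and reads the model’s reply from the response body. The takeaway is that wiring a chat-style model into an existing workflow takes only a few lines of code, which is why API availability, legitimate or criminal, lowers the barrier to adoption so sharply.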
The GenAI criminal activity recently caught the eye of Cybersixgill, an Israeli security firm. According to Delilah Schwartz, who works in threat intel at Cybersixgill, all three products are being advertised for sale.
“Cybersixgill observed threat actors advertising FraudGPT and DarkBARD on cybercrime forums and Telegram, in addition to chatter about the tools,” Schwartz says. “Malicious versions of deep language learning models are currently a hot commodity on the underground, producing malicious code, creating phishing content, and facilitating other illegal activities. While threat actors abuse legitimate artificial intelligence (AI) platforms with workarounds that evade safety restrictions, malicious AI tools go a step further and are specifically designed to facilitate criminal activities.”
The company has noted ads promoting FraudGPT, FraudBot, and DarkBARD as “Swiss Army Knife hacking tools.”
“One ad explicitly stated the tools are designed for ‘fraudsters, hackers, spammers, [and] like-minded individuals,’” Schwartz says. “If the tools perform as advertised, they would certainly enhance a variety of attack chains. With that being said, there appears to be a dearth of actual reviews from users championing the products’ capabilities, despite the abundance of advertisements.”
Related Items:
Feds Boost Cyber Spending as Security Threats to Data Proliferate
Security Concerns Causing Pullback in Open Source Data Science, Anaconda Warns
Filling Cybersecurity Blind Spots with Unsupervised Learning