Introduction to dark AI

As AI becomes more widespread and mainstream, it should come as no surprise that it is also being used for malicious purposes. Innovation can be a double-edged sword, with breakthrough milestones representing opportunities for increased productivity as well as potential weaponization. Dark AI refers to the application of AI technologies — and notably, recent innovations in generative AI (GenAI) — for the purpose of accelerating or enabling cyberattacks. Dark AI is adept at learning and adapting its techniques to breach security systems.

In this article, we'll examine dark AI and the challenges it brings to modern cybersecurity. We'll also look specifically at FraudGPT, an example of dark AI in cybercrime, and explore protection strategies against such threats.

What is dark AI?

Unlike traditional AI — which can be applied to improve efficiency, enhance decision-making, or automate tasks — dark AI is specifically engineered to leverage these benefits to conduct cyberattacks, infiltrating systems and manipulating data. The core function of dark AI is to exploit vulnerabilities in digital infrastructures. The effects of dark AI often go unnoticed until significant damage is already done.

Unlike conventional AI

Conventional AI systems are often built with ethical guidelines and beneficial objectives. In contrast, dark AI operates without these constraints; it is engineered with malicious intent and applied accordingly. By mimicking human behavior through GenAI technologies, dark AI can create deceptive content and self-learn to achieve its objectives. Its ability to learn to bypass existing cybersecurity measures makes dark AI particularly dangerous.

Challenges to cybersecurity

Because its evolving nature can outsmart conventional security defenses, dark AI complicates the detection and mitigation of cyber threats. It can also be used to automate sophisticated attacks at scale. This poses a threat to a wide range of information and digital systems, from personal data to critical infrastructure.

Lowering the barrier to entry for threat actors

The advent of GenAI has significantly lowered the barrier to entry for malicious attackers. Enabled by tools that leverage GenAI, threat actors can conduct faster, more sophisticated attacks without needing a high level of technical expertise.

2023 Threat Hunting Report

In the 2023 Threat Hunting Report, CrowdStrike’s Counter Adversary Operations team exposes the latest adversary tradecraft and provides knowledge and insights to help stop breaches. 

Download Now

FraudGPT: a dark AI example

A real-world example of dark AI is FraudGPT, a tool designed for cybercriminal activities and sold on the dark web. FraudGPT is a GenAI tool with an interface similar to ChatGPT that is designed to enable threat actors to:

  • Write malicious code
  • Create undetectable malware
  • Create phishing pages
  • Create hacking tools
  • Find leaks and vulnerabilities
  • Write scam pages/letters

FraudGPT was discovered by cybersecurity researchers at Netenrich in July 2023, when it began appearing in Telegram channels and dark web forums. The dark web advertisement for FraudGPT included a video of the tool at work, from which the Netenrich researchers captured and published a screenshot.

Just as ChatGPT made it easier for everyday users to produce higher-quality research, summarization, and writing, FraudGPT has done the same for individuals or groups who wish to orchestrate cyberattacks. By leveraging the advanced capabilities of AI for harmful purposes, FraudGPT embodies how dark AI is reshaping the cyber threat landscape.

Impact on organizations and consumers

The emergence of FraudGPT and other tools like it has significant implications for both organizations and consumers. Dark AI tools represent a new era in which AI tools can be trained to penetrate networks with malware, write ransomware to take corporate systems hostage, or generate deepfakes for exploitation or extortion. This poses a grave threat to corporate networks and personal data security.

What are the implications of dark AI for an organization’s cybersecurity practices? The growing use of dark AI underscores the need for more advanced tools to detect and identify AI-driven threats. Organizations also need the expertise and readiness to mitigate these threats and to combat a higher volume of attacks than ever before. The adaptability of tools like FraudGPT necessitates a shift in today’s cybersecurity approach, with a strong emphasis on AI/machine learning (ML) to anticipate and counteract such threats.

CrowdCast: The Convergence of AI and Cybersecurity

AI and cybersecurity are colliding now more than ever. The positive power of AI is apparent in increased efficiency, cost savings, and more. Unfortunately, that same power is available to attackers when these capabilities fall into the wrong hands.

Learn More

How to protect against dark AI

Protecting against dark AI tools like FraudGPT involves a blend of enhanced awareness, advanced technology, high-fidelity threat intelligence, and collaborative efforts.

As a first step, organizations should conduct training to educate employees about these threats. Informed individuals are better equipped to recognize and report suspicious activities, such as phishing attempts. This human element is vital in your organization’s overall defense strategy.
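
To make such training concrete, some teams codify the red flags they teach, such as urgency language, lookalike sender domains, and links that point to raw IP addresses. The Python heuristic below is a deliberately simplified sketch; the phrase lists, suspicious TLDs, and sample message are illustrative assumptions, and real phishing detection relies on far richer signals.

    import re

    # Illustrative red flags that awareness training commonly highlights.
    # These lists are assumptions for the example, not a vetted ruleset.
    URGENCY_PHRASES = ("act now", "verify your account", "password expires")
    SUSPICIOUS_TLDS = (".zip", ".top", ".xyz")

    def phishing_red_flags(sender, subject, body):
        flags = []
        text = (subject + " " + body).lower()
        if any(phrase in text for phrase in URGENCY_PHRASES):
            flags.append("urgency language")
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain.endswith(SUSPICIOUS_TLDS):
            flags.append("suspicious sender TLD: " + domain)
        if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
            flags.append("link points to a raw IP address")
        return flags

    # A made-up message that trips all three checks.
    print(phishing_red_flags(
        "it-support@corp-helpdesk.zip",
        "Action required",
        "Your password expires today. Act now: http://192.0.2.10/login",
    ))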

Next, modern organizations should adopt AI-native cybersecurity tools that can match the sophistication and speed of threats originating from dark AI. AI-native tools can detect unusual patterns indicative of dark AI activity, and — like the threats they counteract — they can continually evolve to stay effective.
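
As a rough illustration of the kind of pattern analysis such tools perform, the sketch below trains scikit-learn's IsolationForest on baseline activity and flags outliers. The features (hour of day, failed logins, data transferred, distinct hosts contacted) and the sample numbers are assumptions chosen for the example, not a depiction of any specific product.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Baseline events, one row per login session:
    # [hour of day, failed logins, KB transferred, distinct hosts contacted]
    baseline = np.array([
        [9, 0, 120, 2], [10, 1, 150, 3], [11, 0, 90, 2],
        [14, 0, 200, 4], [15, 1, 170, 3], [16, 0, 110, 2],
    ])

    # Fit on normal activity; predict() returns -1 for outliers, 1 otherwise.
    model = IsolationForest(contamination=0.1, random_state=42)
    model.fit(baseline)

    new_events = np.array([
        [10, 0, 130, 3],    # routine daytime session
        [3, 25, 9000, 40],  # off-hours burst: many failures, many hosts
    ])
    for event, label in zip(new_events, model.predict(new_events)):
        status = "anomalous" if label == -1 else "normal"
        print(event, "->", status)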

GenAI and other AI/ML technologies are enabling cybercriminals to conduct their attacks closer and closer to machine speed. Therefore, AI-native tools are critical for defenders to reclaim the speed advantage against these adversaries.

Lastly, organizations should work in close collaboration with the cybersecurity community. By exchanging information, organizations can advance the industry’s awareness of emerging tradecraft and proactively fortify themselves against dark AI developments.
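
As a simple illustration of how shared intelligence is put to work, the sketch below checks local log entries against a set of community-shared indicators of compromise (IOCs). The indicator values and log lines are made up for the example; in practice, indicators often arrive through standardized feeds such as STIX/TAXII.

    # Hypothetical IOCs shared by industry peers or an ISAC feed.
    shared_iocs = {
        "malicious-domain.example",
        "198.51.100.23",
        "44d88612fea8a8f36de82e1278abb02f",  # sample file hash
    }

    # Made-up log lines standing in for proxy or endpoint telemetry.
    log_entries = [
        "GET http://malicious-domain.example/payload.bin from 10.0.0.5",
        "GET http://intranet.corp/home from 10.0.0.7",
    ]

    # Flag any log line that references a shared indicator.
    for line in log_entries:
        hits = [ioc for ioc in shared_iocs if ioc in line]
        if hits:
            print("ALERT:", hits, "observed in:", line)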

The role of CrowdStrike in combating dark AI

The rise of dark AI represents a significant and escalating threat in modern cybersecurity. Encompassing malicious tools like FraudGPT, dark AI is capable of evolving and adapting, making subsequent threats harder to detect and counteract. Dark AI also brings the ability to automate complex cyberattacks at scale, which poses a substantial risk to organizations everywhere.

CrowdStrike plays a pivotal role in the fight against dark AI threats. CrowdStrike Falcon® Adversary Intelligence provides real-time monitoring of the dark web, delivering critical insights into the activities of cybercriminals and tracking malicious activity across criminal forums, criminal marketplaces, and underground communities. Falcon Adversary Intelligence and the Falcon platform help security teams optimize the entire security stack through automated intelligence orchestration, contextual enrichment, and AI-native investigative tools.

Learn More

For more information on the Falcon platform and Falcon Adversary Intelligence, attend a Falcon Encounter hands-on lab.

Register Today