GhostGPT: The New Weapon of Hackers

Cybercriminals are now exploiting generative AI to refine their attacks.

While AI-based security tools strengthen resilience against cyber threats, cybercriminals are now exploiting generative AI to refine their attacks. One of the latest examples is GhostGPT, an uncensored AI chatbot designed specifically for cybercrime. Identified by researchers at Abnormal Security, GhostGPT lets hackers generate malware, craft phishing emails, and develop exploits with alarming efficiency.

Malware and Exploit Development
  • Automatic generation of malicious code, ransomware, and backdoors.
  • Elimination of the need for advanced programming skills.
  • Proliferation of digital threats through more accessible tools.
Creation of Polymorphic Malware
  • Generation of evolving malware that constantly modifies its structure to evade antivirus detection.
  • Bypassing of signature-based detection solutions.
Phishing and BEC (Business Email Compromise) Scams
  • Creation of highly convincing phishing emails, impersonating brands or corporate executives.
  • In tests conducted by Abnormal Security, GhostGPT generated a fake DocuSign email realistic enough to easily deceive users.
Automation of Social Engineering
  • Automatic generation of manipulative dialogues to trick victims into disclosing sensitive information.
  • Facilitation of spear-phishing attacks and frauds based on deepfakes.

Credit: Abnormal Security

How Does GhostGPT Work?

GhostGPT appears to use a wrapper to connect to a jailbroken version of ChatGPT or another large language model (LLM).

One of GhostGPT's selling points is its ease of access. Sold on Telegram, it gives cybercriminals immediate access through a simple subscription.

GhostGPT Pricing:
  • $50 per week.
  • $150 per month.
  • $300 per quarter.

No complex configuration is required, meaning even cybercriminals without technical skills can produce malicious content.

The creators of GhostGPT highlight fast response times and a “no logs” policy (no retention of interactions), making it easier to conceal illegal activities.

Researchers also report that posts about GhostGPT have been viewed thousands of times on online forums, evidence of hackers' growing interest in generative AI for creating malicious content.

The Era of Criminal Chatbots Is Just Beginning

Unfortunately, GhostGPT is not an isolated case. It is part of a growing trend of criminal AI tools, alongside WormGPT, WolfGPT, EscapeGPT, and FraudGPT, all of which have already been used to enhance cyberattacks.

As GhostGPT's popularity increases, its creators appear to be growing more cautious. Many of the accounts promoting the chatbot have been deleted, and sales are shifting to private channels, making access more exclusive. Discussions on cybercriminal forums have been closed, complicating efforts to identify those behind the project. As of today, there is no definitive information on the identity of GhostGPT's developers.