
AI Is Being Used for Phishing. Now What?

Email templates. College essays. Song lyrics. Career advice. These are just a few of the things that new tools like OpenAI’s ChatGPT can produce in a matter of seconds.

Now, though, experts are adding a more dubious achievement to ChatGPT’s impressive resume: phishing and social engineering.

What’s going on? Why are cybercriminals getting a boost from ChatGPT and AI, and what can organizations do to protect themselves? Read on to find out more.

Attackers get chatty with ChatGPT

According to a new report from the security firm NordVPN, the number of new dark web posts about OpenAI’s chatbot grew by over 600% between January and February 2023, with NordVPN describing ChatGPT as “the dark web’s hottest topic.”

That’s because, Europol warns, ChatGPT’s ability to draft highly realistic written text makes it a useful tool for phishing and social engineering.

In phishing attacks, attackers use social engineering, crafting personalized, authentic-sounding messages to trick people into downloading malware. This method, while effective, is usually time-consuming to execute.

Now, though, attackers can use chatbots to expedite their efforts, churning out emails and SMS messages in a matter of moments. This allows them to perpetrate scams on an industrial scale in a fraction of the time.

Flawless grammar and fake logos

Threat actors aren’t just using ChatGPT to draft messages; they’re also using it to polish their grammar and produce a more natural writing style. Clumsy spelling and phrasing have long been telltale signs of phishing, so AI-polished text removes one of the classic red flags, making malicious messages more plausible and victims more likely to be fooled.

Cybercriminals are also using AI tools to create credible-looking logos and other graphics for their phishing campaigns. It’s now easy to create content featuring fake logos to impersonate trusted organizations and trick users into clicking on malicious links.

Other AI-assisted cybercrimes

Beyond phishing, there’s a wide range of other cybercrimes that are getting a boost from artificial intelligence. One threat actor described using ChatGPT to help code up features of a dark web marketplace, including a cryptocurrency value monitoring system. Others are using the tool to generate convincing personas and even automate small talk for dating scams.

Below, we’ll cover a few of the top AI-assisted vectors that are emerging.

Using chatbots to recreate malware

In addition to facilitating social engineering, AI can offer suggestions for creating new malware. This significantly reduces the barrier to entry for threat actors lacking programming abilities or technical expertise.

For instance, even with just a rudimentary knowledge of cybersecurity and computer science, attackers can now use chatbots to create basic malware. Experiments have already shown that ChatGPT can successfully replicate known malware strains pulled from the dark web.

Other experiments have used ChatGPT to replicate malware described in cybersecurity research publications, including a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp directory, compresses them into a ZIP file, and uploads them to a predetermined FTP server.

Selling stolen chatbot credentials

Some cybercriminals have taken things a step further and are now trading or selling stolen ChatGPT account credentials. Others give credentials away for free as a form of advertising, showcasing their ability to steal login information.

Premium account credentials are in especially high demand among attackers because they let cybercriminals bypass OpenAI’s geofencing restrictions and gain unlimited access to ChatGPT. The practice has been on the rise since March 2023 and shows no signs of slowing down.

Using deepfakes to perpetrate fraud

Cyber attackers are increasingly using deepfakes, meaning videos, images, and voice recordings that have been digitally created or altered with artificial intelligence, to carry out their attacks. We’ve already mentioned that AI-generated images can supply fake business logos for phishing campaigns, but that’s just scratching the surface.

One tactic involves creating deepfake websites that mimic legitimate ones. These malicious websites are designed to closely resemble the genuine sites of reputable businesses, often incorporating identical logos, layouts, and content. Users may enter personal information like login credentials or credit card numbers into these fraudulent platforms, leading to identity theft, financial loss, and data exposure.
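Defenders can catch some of these lookalike sites with surprisingly simple tooling. As a minimal, illustrative sketch (the brand list and distance threshold below are assumptions, not a production detector), a script might flag newly observed domains that sit within a small edit distance of the brands it protects:

```python
# Minimal sketch: flag candidate domains that closely resemble known brands.
# The brand list and distance threshold below are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

KNOWN_BRANDS = ["paypal.com", "microsoft.com", "shardsecure.com"]  # assumed list

def is_lookalike(domain: str, max_distance: int = 2) -> bool:
    """Flag a domain that is near, but not identical to, a protected brand."""
    return any(
        0 < edit_distance(domain.lower(), brand) <= max_distance
        for brand in KNOWN_BRANDS
    )

print(is_lookalike("paypa1.com"))   # True: one character swapped
print(is_lookalike("example.org"))  # False: nothing similar
```

Real-world detectors also account for homoglyph tricks (like “rn” standing in for “m”) and common typosquatting patterns, but the edit-distance check above captures the core idea.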

Another tactic uses AI voice cloning to impersonate individuals in positions of authority or trust, such as company executives, customer support agents, or family members. These convincing cloned voices can be deployed in phone calls, voicemails, and even voice-based authentication systems, deceiving victims into revealing sensitive information or compromising account security.

Protect your data from AI-assisted cyberattacks

With the world of cybercrime evolving as swiftly as the artificial intelligence tools it relies on, companies need to make sure their data is protected.

ShardSecure’s platform offers advanced data security, preventing access by unauthorized users and third parties. Even if an AI-assisted phishing attempt succeeds, our technology can help mitigate the impact of malware and ransomware: our data integrity checks detect when data is altered, and our self-healing feature transparently reconstructs that data to its earlier state in real time.
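ShardSecure’s integrity and self-healing mechanisms are proprietary, but the general concept of detecting altered data can be illustrated generically. Here is a minimal sketch, assuming a simple SHA-256 baseline rather than any vendor-specific technique:

```python
# Generic illustration of data integrity checking: record a SHA-256 baseline
# for each file, then detect later alteration by rehashing and comparing.
# This is a conceptual sketch, not ShardSecure's actual mechanism.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record the known-good digest of each file."""
    return {str(p): sha256_of(p) for p in paths}

def altered_files(baseline):
    """Return paths whose current hash no longer matches the baseline."""
    return [p for p, digest in baseline.items()
            if sha256_of(Path(p)) != digest]
```

In practice, the baseline itself must live somewhere tamper-resistant, and remediation (the “self-healing” step) requires intact copies or fragments from which the altered data can be rebuilt.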

Additionally, our innovative, agentless approach to file-level protection secures unstructured data with no performance hit. We also offer strong data resilience and high availability during outages, attacks, and other unexpected disruptions.

To learn more about our platform, visit our solutions page.

Sources

ChatGPT Is Dark Web’s ‘Hottest Topic’ as Criminals Look To Weaponise AI | The Independent

Europol Sounds Alarm About Criminal Use of ChatGPT, Sees Grim Outlook | Reuters

Dark Web Criminals Plot ChatGPT Takeover — With 625% Rise in ‘Bot Hacking’ Posts | IFA

Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots | Forbes

Hacking ChatGPT: ‘The Dark Web’s Hottest Topic’ | Virtualization Review

Stolen ChatGPT Premium Accounts up for Sale on the Dark Web | CSO Online

Deepfakes: Get Ready for Phishing 2.0 | Fast Company

Cybercriminals Are Using AI Voice Cloning Tools To Dupe Victims | CBS