A new malicious AI has just been born. Called WormGPT, this chatbot is able to assist cybercriminals in their illegal activities. Devoid of safeguards, it represents a serious threat to all Internet users.
While investigating an underground forum favored by cybercriminals, computer security researchers at SlashNext discovered the emergence of a new AI, WormGPT. The chatbot is specifically designed to assist its users in their malicious activities. It was developed by a hacker, who now sells the tool on black markets.
WormGPT, the dark side of AI
Mirroring FreedomGPT, WormGPT is a "dark alternative to OpenAI's GPT models", which power ChatGPT and are bound by a series of restrictions. The chatbot is not limited by ethical or legal considerations. In practice, it can answer questions about viruses, computer attacks or scams in every imaginable detail. ChatGPT and its peers, by contrast, generally refuse to discuss this type of subject.
WormGPT is based on GPT-J, an open-source language model developed by EleutherAI, a non-profit research group specializing in AI. Cybercriminals took this model and trained it on "malware-related data". Thanks to this trove of information, the AI has specialized in illicit activities online.
Among the use cases of WormGPT is the creation of personalized and convincing phishing emails. SlashNext researchers also had the opportunity to test the AI by asking it to generate a convincing email for use in a Business Email Compromise (BEC) attack. This type of attack aims to manipulate an employee into disclosing sensitive company data or sending money.
In this case, the email was intended to trap an account manager by asking him "to pay an invoice urgently". To fool its target, the chatbot posed as the CEO of the company. The researchers were flabbergasted by the message devised by WormGPT. Within seconds, the bot generated "an email that was not only remarkably persuasive, but also strategically cunning."
The chatbot stands out first for its "impeccable grammar". Poor writing is one of the clues that usually allow you to spot a fake email: spelling mistakes, strange and convoluted turns of phrase, and syntax errors often ruin the trap set by hackers. That is not the case with an email written by AI, which is much harder to identify. As a result, AI is a formidable asset for hackers who wish to deploy phishing attacks, whether against the business world or ordinary Internet users.
“This experience underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals,” SlashNext summarizes after its investigation.
Hackers trick ChatGPT and Google Bard
At the same time, many cybercriminals have found a way to manipulate existing artificial intelligences. On forums, hackers have posted a series of prompts capable of circumventing the restrictions of ChatGPT, Google Bard and others. In effect, hackers are explaining to their peers how to carry out a prompt-injection attack on chatbots. This type of attack can convince an AI to ignore its programming and bypass the restrictions put in place by its creators.
Once the attack has been carried out, the AIs will obediently assist the criminals and answer their questions. For example, hackers are currently using ChatGPT to polish their phishing messages, even when they barely know their victims' language, SlashNext explains:
“attackers, even those without language proficiency, are now more capable than ever of crafting persuasive emails.”
Strategies for manipulating AI in this way are currently shared on dark web black markets and specialized forums. Europol, the European criminal police agency, had previously highlighted the "vast potential" of generative AI for cybercriminals. In a report published in March, the agency estimated that hackers, scammers and other criminals were already relying heavily on chatbots to write phishing emails, code viruses or manipulate Internet users.