WormGPT Might Become Hackers’ New Best Imaginary Friend

There is a new custom-trained LLM (Large Language Model) out there, and for the worst possible reasons. WormGPT, as named by its creator, is a conversational tool built on the GPT-J language model, released in 2021, and trained with the sole purpose of creating and deploying black-hat code and tools. It promises that users will be able to develop top-notch malware at a fraction of the cost (and knowledge) previously required. The tool was tested by cybersecurity company SlashNext, which warned in a blog post that malicious actors are now building their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. The service is available via subscription: 60 euros per month or 550 euros per year. Even hackers, it seems, love Software as a Service.
According to WormGPT's developer, the project aims to provide an alternative to ChatGPT, one that permits all sorts of illegal activity and can easily be sold online. Everything related to black-hat work, he claims, can be done with WormGPT, giving anyone access to malicious activity without leaving the comfort of their home.
Democratization is a very good thing, but it may not be the best when it comes to the proliferation and empowerment of bad actors.
According to screenshots posted by its creator, WormGPT basically behaves like an unguarded version of ChatGPT: it makes no attempt to block risky conversations. WormGPT appears able to generate malware written in Python, and to offer tips, strategies, and workarounds for problems with malware deployment.
SlashNext’s analysis of the tool was disturbing. The researchers instructed it to compose an email intended to pressure a victim into paying a fraudulent invoice. WormGPT crafted an email that was not only highly persuasive but also strategically cunning, demonstrating its potential for sophisticated phishing and BEC (Business Email Compromise) attacks.
It was only a matter of time before someone took all the good things about open-source artificial intelligence (AI) models and turned them on their head. Building a humorous, boastful persona for a chat-style AI assistant (see BratGPT) is one thing. Training conversational models on the specific language and obfuscation techniques of the dark web is another. But applying ChatGPT-class programming skill solely to developing AI-created malware is something else altogether.
Of course, it is theoretically possible that WormGPT is really a honeypot: an AI agent trained to produce malware that always gets caught and reliably identifies whoever sent it. I’m not saying that’s what’s happening with WormGPT, but it’s possible, so I recommend that anyone using this tool check the generated code one line at a time.
It is important to note that there is little (if any) indication that these privately developed AI agents match the general capability of OpenAI’s ChatGPT. Although the technology and tooling have improved significantly, training AI agents remains expensive and time-consuming without the right funding (and data). But as companies sprint toward the AI gold rush, costs will continue to plummet, datasets and training methods will improve, and more capable rogue AI agents like WormGPT and BratGPT will keep emerging.
WormGPT may be the first such system to reach mainstream recognition, but it certainly won’t be the last.