ChatGPT Can Generate Mutating Malware That Evades Modern Security Techniques
In the right hands, ChatGPT can produce funny and entertaining results, such as a Big Mouth Billy Bass project. But there is a much darker side to AI, one that could create serious problems for the future of IT. Several IT professionals recently outlined the dangerous potential of ChatGPT to create polymorphic malware that is nearly impossible to catch using endpoint detection and response (EDR).
EDR is a type of cybersecurity technology that monitors endpoints to catch malicious software. However, experts suggest that this traditional protocol is no match for the potential harm ChatGPT can cause. Code that changes itself (this is where the term polymorphic comes in) is much harder to detect.
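To see why mutation defeats signature-based detection, consider a minimal, entirely benign sketch: two functionally identical snippets whose file signatures (here, SHA-256 hashes) differ after a trivial change. The snippet contents and the `greet` function are hypothetical illustrations, not taken from any of the research described here.

```python
import hashlib

# Two functionally identical snippets; the second contains a trivial,
# behavior-preserving mutation (an added comment).
variant_a = "def greet():\n    return 'hello'\n"
variant_b = "def greet():\n    # harmless mutation\n    return 'hello'\n"

# A signature-based scanner matching known hashes sees two unrelated files.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

print(sig_a == sig_b)  # False: same behavior, different signatures
```

A real polymorphic program applies this idea continuously, regenerating its own code on each run so that no static signature stays valid for long, which is why defenders increasingly rely on behavioral analysis instead of hashes alone.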
Most large language models (LLMs), ChatGPT included, are designed with filters that block content their creators deem inappropriate, ranging from specific topics to, in this case, malicious code. However, it didn’t take long for users to figure out how to get around these filters, and this tactic makes ChatGPT particularly vulnerable to abuse by individuals trying to create malicious scripts.
Jeff Sims is a security engineer at HYAS InfoSec, a company focused on IT security. Back in March, Sims released a white paper detailing a proof-of-concept project he calls BlackMamba. The application is a type of polymorphic keylogger that sends a request to the ChatGPT API each time it runs.
“Using these new techniques, attackers can combine a series of normally highly detectable behaviors in unusual combinations and take advantage of what models fail to recognize as malicious patterns to evade detection,” Sims explains.
Another cybersecurity company, CyberArk, recently demonstrated ChatGPT’s ability to create this type of polymorphic malware in a blog post by Eran Shimony and Omer Tsarfati. The post describes how code injected from ChatGPT requests can circumvent modern techniques used to detect malicious behavior and modify scripts at runtime.
At this time, these examples exist only as proofs of concept, but the hope is that this awareness will spur further defenses against the harm this kind of mutating code could cause in real-world environments.