
ChatGPT Can Be Tricked Into Creating Dangerous Malware!

14th April 2023
Original source: gizchina.com

In an age where artificial intelligence (AI) and machine learning have become increasingly integrated into our daily lives, the potential for misuse has also risen. A recent example demonstrates just how easily ChatGPT can be tricked into creating something very dangerous.

CHATGPT IS CAPABLE OF CREATING ADVANCED MALWARE AND POSES A SIGNIFICANT THREAT

Aaron Mulgrew, a security researcher at Forcepoint and a self-proclaimed novice, tested the limits of ChatGPT's capabilities. He discovered a loophole that allowed him to create sophisticated zero-day malware within just a few hours. The feat is particularly noteworthy given that Mulgrew had no prior coding experience.

OpenAI has implemented safeguards to prevent users from prompting ChatGPT to write malicious code. However, Mulgrew bypassed these protections by asking the chatbot to generate small pieces of malicious code one function at a time. After assembling the individual functions, he ended up with a highly advanced data-stealing executable that was almost impossible to detect.

Unlike traditional malware, which typically requires teams of hackers and substantial resources, Mulgrew created his malware single-handedly and in a fraction of the time. This episode underlines the potential risks of AI-powered tools like ChatGPT and raises questions about their safety and how easily they can be exploited.

THE CHATGPT MALWARE: A CLOSER LOOK

This is not the first time ChatGPT has been tricked into doing something it was designed to refuse. Mulgrew's malware disguises itself as a screensaver application with an SCR extension. When launched on a Windows system, it sifts through files such as images, Word documents, and PDFs to find valuable data to steal.

One of the most impressive aspects of the malware is its use of steganography, the practice of hiding data inside other, innocuous-looking files. The malware breaks the stolen data into smaller fragments, conceals those fragments within images on the infected computer, and uploads the images to a Google Drive folder, a process that effectively evades detection by security software.
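For readers unfamiliar with the concept, the sketch below is a minimal, purely educational illustration of least-significant-bit (LSB) steganography in Python, hiding a short text message inside an image's pixel data. It assumes the Pillow imaging library is installed; it is a generic textbook example, not the code described in this article, which was not published.

from PIL import Image

def embed(image_path: str, message: str, out_path: str) -> None:
    """Hide `message` in the least-significant bit of each pixel channel."""
    img = Image.open(image_path).convert("RGB")
    # Encode the message as a bit string, MSB first, with a null terminator.
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    bits += "00000000"
    pixels = list(img.getdata())
    if len(bits) > len(pixels) * 3:
        raise ValueError("message too long for this image")
    flat = [channel for px in pixels for channel in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)  # overwrite only the lowest bit
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")  # a lossless format preserves the hidden bits

def extract(image_path: str) -> str:
    """Read LSBs until the null terminator and decode the hidden message."""
    flat = [channel for px in Image.open(image_path).convert("RGB").getdata()
            for channel in px]
    out = bytearray()
    for i in range(0, len(flat) - 7, 8):
        byte = 0
        for offset in range(8):
            byte = (byte << 1) | (flat[i + offset] & 1)
        if byte == 0:  # terminator reached
            break
        out.append(byte)
    return out.decode("utf-8")

Because only the lowest bit of each colour channel changes, the carrier image looks identical to the naked eye, which is why steganographic payloads can slip past scanners that only inspect file types and signatures.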

Mulgrew also showed how easy it was to refine and harden the code against detection using simple ChatGPT prompts. In early tests on VirusTotal, only five out of 69 detection products flagged the malware; a later version of the code went completely undetected.
