Check Point Software Technologies Ltd is already detecting the first cases of cybercriminals using ChatGPT to develop malicious tools. In underground forums on the Dark Web, cybercriminals are creating infostealers and encryption tools and facilitating fraudulent activity. The researchers want to warn of the growing interest of attackers in ChatGPT.
There has been much discussion about artificial intelligence (AI), a disruptive technology with the potential to dramatically improve our lives through personalized medicine and safer transportation, among other uses.
It has great potential to help the cybersecurity industry speed up the development of new protection tools and validate some aspects of secure coding. However, introducing this new technology also carries a potential risk that must be considered.
The world experienced a 38% increase in cyberattacks in 2022 compared to 2021, with businesses attacked 1,168 times per week on average. Education and healthcare were among the most targeted sectors, with attacks paralyzing hospitals and schools. We may now see an exponential increase in cyberattacks due to ChatGPT and other AI models.
Check Point Research describes three recent cases that show this danger and the growing interest of cybercriminals in using ChatGPT to scale up malicious activities:
Infostealer creation: On December 29, 2022, a thread called “ChatGPT – Malware Benefits” appeared on a popular underground hacking forum. The author of the thread revealed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups on common malware.
These messages taught other, less technically-savvy attackers how to use ChatGPT for malicious purposes, with real-life examples they could immediately apply.
Creating a multi-layer encryption tool: On December 21, 2022, a cybercriminal nicknamed USDoD published a Python script that he referred to as the “first script [he] ever created.” When another cybercriminal commented that the code style resembled OpenAI code, USDoD confirmed that OpenAI had given him a “good [hand] to finish the script with a good scope.”
This could mean that potential cybercriminals with little or no development skills could use ChatGPT to develop malicious tools and become attackers with technical capabilities. Of course, all of the above code can be used benignly. However, the script could be modified to encrypt a computer without any user interaction; for example, the code could be turned into ransomware.
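The report does not publish USDoD's actual script. Purely as an illustration of what a basic multi-layer symmetric encryption utility looks like, here is a minimal, stdlib-only Python sketch (the function names and the toy XOR-keystream cipher are my own assumptions, not the criminal's code, and this construction is not secure for real-world use):

```python
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key || counter (counter-mode style)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])


def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying the same call again decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


def multi_layer_encrypt(data: bytes, keys: list[bytes]) -> bytes:
    """Apply one XOR layer per key; decrypt by running the layers again in reverse order."""
    for k in keys:
        data = xor_layer(data, k)
    return data
```

A tool like this is harmless when a user runs it on their own files, which is the point made above: the same few dozen lines, pointed at someone else's files and paired with a ransom note, become the core of a ransomware payload.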
Facilitating ChatGPT for fraudulent activities: In this case, a cybercriminal demonstrates how to create a marketplace for scripts on the Dark Web using ChatGPT. The primary role of the market in the illicit underground economy is to provide a platform for automated trading of illegal or stolen goods, such as stolen payment accounts or cards, malware, or even drugs and ammunition, with all payments being made in cryptocurrency.