Criminals Are Using ChatGPT To Conduct Phishing Schemes, Cyber Attacks
OpenAI's viral chatbot ChatGPT has shaken up the tech world, becoming one of, if not the, most used applications in the world almost overnight. Among those actively using the artificial intelligence (AI) tool, however, are criminals, who are leveraging its text-generating capabilities to conduct phishing schemes and cyber attacks.
GBHackers, a cybersecurity platform, reported that threat actors are using ChatGPT to write phishing emails aimed at stealing personal information, create multi-layer encryption tools for extortion, and generate attack scripts for identity theft.
Even those without coding experience can now use the AI to create malicious software, for example in the Python programming language, which can be used to remotely lock victims out of their devices.
Threat actors also use ChatGPT to steal victims' sensitive information by tricking them into believing they're communicating with, and falling in love with, a real person. They create a virtual persona for the AI to pose as, and it writes the chat messages and love letters to be sent in mere seconds. Once the victims' guard is down, the malicious links are sent.
Phishing schemes aren't new, but with these AI tools able to write like humans, it becomes harder to distinguish a malicious message from a legitimate one. AI-written messages also stand a strong chance of bypassing security systems, as they don't necessarily follow a template and, as previously mentioned, can resemble human writing. A study by cybersecurity software provider McAfee found that more than two-thirds of respondents couldn't tell that the sample love letter they read was written by an AI rather than a human.
What's perhaps even more worrying is that generative AI can write these messages in seconds, enabling threat actors to streamline and scale their criminal activities. That's likely the prime reason they're so attracted to ChatGPT.
According to a security expert from IT company Check Point, a number of criminals have also managed to create applications that break through ChatGPT's security restrictions, which are meant to prevent the AI tool from being used for illegal purposes. And they're selling those applications on the dark web. Breaking through the restrictions allows the criminals to access ChatGPT's application programming interface (API), a part of the application reserved for developers, and integrate the GPT model into their own malicious software. For example, they have already found a way to integrate the GPT-3 model into Telegram, which lets them bypass the platform's security supervision.
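For context, the API referred to here is the developer interface OpenAI exposes for its GPT models. The snippet below is a minimal, benign sketch of what a legitimate call to that API looked like in the GPT-3 era, using the official openai Python library; the model name, prompt and parameters are illustrative only, not drawn from any attack described above.

    import openai

    # Developers authenticate with an API key issued by OpenAI.
    openai.api_key = "YOUR_API_KEY"

    # Legacy completion-style request to a GPT-3-family model
    # (illustrative model name and parameters).
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a short, friendly greeting for a newsletter.",
        max_tokens=50,
        temperature=0.7,
    )

    # The generated text comes back in the response object.
    print(response.choices[0].text.strip())

Integrating the model into another application, such as a Telegram bot, essentially means wrapping calls like this one in that application's own messaging logic, which is why the safeguards built into the ChatGPT web interface don't automatically carry over.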
Apart from these, ChatGPT has reportedly been used to help develop ransomware and generate SIM swap attack scripts. In a SIM swap attack, threat actors hijack a victim's phone number by having it transferred to a SIM card they control.
ChatGPT's potential for crime has caught the attention of U.S. lawmakers, who now hope to regulate the emerging technology so that its impact is more positive than negative. One such lawmaker is California Democrat Ted Lieu, who is pushing the House to establish a commission that would guide the creation of policies regulating AI technologies. Members of the U.S. Senate are similarly calling for stronger supervision of ChatGPT-like tools.
"As one of just three members of Congress with a computer science degree, I am enthralled by A.I. and excited about the incredible ways it will continue to advance society. And as a member of Congress, I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated," wrote Lieu in a New York Times op-ed.
OpenAI itself is in favour of working with lawmakers to bring AI technologies to the public in a responsible manner, with CTO Mira Murati telling Time that regulations could give the startup a better chance at competing with giants like Google.
The challenge, perhaps, is in making sure the regulations don't lag behind the technology. Many point to the social media boom of the early 2000s as the prime example: had regulations been put in place earlier in the lifespans of Facebook, Twitter and others, the social media giants could have been held accountable for the harms they brought. The problem is that not all lawmakers are in favour of regulation, with some saying it stifles innovation, while others have little to no understanding of the technologies involved.