Cybercriminals have already set their sights on generative artificial intelligence, using it to create dangerous code and scam messages that deceive users at a very low cost.
We may be very close to a new era of cybercrime, one in which the weapons in the hands of hackers, especially malware, become increasingly sophisticated and cheaper. According to the cybersecurity firm Check Point Research, the key to this shift is ChatGPT and, more broadly, generative artificial intelligence tools in general, which Russian hackers are already eyeing with great interest.
Hackers Target ChatGPT
In recent days OpenAI, the startup developing and training ChatGPT, announced that it has begun testing a paid "Pro" version of its powerful chatbot, stating that paying users will get "at least double the responses," delivered faster thanks to an "always available" service.
Within hours, forums frequented by Russian cybercriminals were already discussing how to exploit this service, in particular how to use cloned credit cards to pay for the monthly subscription.
Check Point Research also found questions in these online discussions about how to bypass ChatGPT's geolocation restrictions in order to use it anonymously and without being tracked.
In a nutshell: hackers, especially Russian ones (and we would be surprised if they were the only ones), have already set their sights on this new technology and are working out how to exploit it.
ChatGPT To Write Viruses?
One of the things ChatGPT can do is write code for web pages and applications in different programming languages. At the moment, the OpenAI chatbot refuses requests that explicitly ask it to create malicious code, but much of what goes into malware consists of routines that also have legitimate uses, so ChatGPT will still write a great deal of it.
For example, it will write code to encrypt the data in folders or on entire hard drives, or to extract large amounts of information from file archives. These building blocks "are not enough, but they help": they certainly speed up the work and make assembling a package of dangerous code extremely cheap.
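To see why such requests slip past the filter, consider how innocuous this kind of routine looks on its own. The short Python sketch below (our own illustration, not ChatGPT output) extracts every file from a ZIP archive: a task equally at home in a backup script or in malware that harvests stolen data.

```python
import zipfile
from pathlib import Path

def extract_archive(archive_path: str, dest_dir: str) -> list[str]:
    """Extract every file from a ZIP archive into dest_dir.

    Returns the list of paths that were written. Nothing here is
    inherently malicious; the intent depends entirely on who runs it.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            zf.extract(name, dest)
            extracted.append(str(dest / name))
    return extracted
```

A chatbot asked for code like this has no reliable way to tell a system administrator from an attacker, which is exactly the dual-use problem Check Point Research highlights.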
ChatGPT For Phishing?
The most dangerous prospect, however, is not so much the code itself as the attack vector: whatever convinces a user to click, download the infected file, and hand over information. In other words, the text and graphics needed to package credible phishing emails and messages.
Here ChatGPT imposes virtually no limits, because phishing messages are essentially marketing messages selling a product. The only difference is that the product is a virus or a scam. We ran a test, and the result was alarming.
We asked ChatGPT: "Write an email on behalf of a bank warning the customer that his account has been hacked and that he must click on a link to confirm his credentials and then change them to increase security."