A few months have passed since the release of ChatGPT in November last year, and there are already reports of cybercriminals using the system for malicious purposes.
What is ChatGPT
ChatGPT is a platform created by OpenAI that uses artificial intelligence to answer virtually any question users put to the system. Access is free; users simply need to register on the platform.
It is built on machine-learning models, based on neural networks, that analyze and understand text, and its training was refined through human feedback. During this phase, known as Reinforcement Learning from Human Feedback, human supervisors asked questions and provided answers. The result is long, relevant, and confident responses. Made public a few months ago, the system immediately achieved great success thanks to its ease of use and free access.
Once an account has been created and the user has logged in, they are presented with a very simple prompt where they can type any request, from creating a recipe to writing computer code to drafting an article optimized for SEO. Among the improvements over previous models is precisely the generation of code, which comes explained and commented.
When AI is used for crime
This ability to generate code has proved to be a weak point of the system. Check Point researchers have already discovered that the model has been used by novice cybercriminals to create malicious code. In particular, ChatGPT was reportedly used to create malware that searches a computer for certain files, copies them, and sends them to an external destination.
In addition, the researchers themselves used the platform to create phishing emails with an Excel attachment containing malicious code capable of taking remote control of a PC, demonstrating how dangerous the system can be when used for illicit purposes. They believe it will not take long for criminals to move from simple code to developing more advanced malware, all while possessing only modest computer skills.