Malicious actors are weaponizing the hype around AI tools.
AI tools are wildly popular right now, and everyone is paying attention to them, including hackers.
According to a new report from Facebook’s parent company, Meta, the company’s security team is actively tracking new malware threats, including ones that weaponize the current AI trend.
“Over the past several months, we’ve investigated and taken action against malware strains taking advantage of people’s interest in OpenAI’s ChatGPT to trick them into installing malware pretending to provide AI functionality,” the company writes in the report.
Meta says it has discovered “around ten new malware families” that pose as AI chatbot tools, such as OpenAI’s popular ChatGPT, in order to hijack users’ accounts.
One of the more pressing schemes, according to Meta, involves malicious web browser extensions that claim to offer ChatGPT functionality. Users install these extensions for browsers like Chrome or Firefox expecting an AI chatbot, and some of them do deliver the advertised functionality. However, the extensions also bundle malware that can gain access to the user’s device.
Meta says it has discovered more than 1,000 unique URLs offering malware disguised as ChatGPT or other AI-related tools, and has blocked them from being shared on Facebook, Instagram, and WhatsApp.
Once a user downloads the malware, Meta says, bad actors can launch their attack immediately, and they are constantly updating their methods to evade security protections. In one case, attackers were able to quickly automate the process of taking over business accounts and granting themselves advertising permissions.
Meta says it has reported the malicious links to the various domain registrars and hosting providers that these bad actors use.
Meta’s security researchers delve into the more technical aspects of recent malware strains, such as Ducktail and NodeStealer, in the full report, which can be read in its entirety.