Criminals, too, see productivity gains from AI

News
12 Jun 2024 | 6 mins
Generative AI | Threat and Vulnerability Management

A new study looks at how criminals are using AI to further their goals. Bottom line: It’s disturbing.

Cybercriminals are beginning to use artificial intelligence to make their operations more effective, and their use of the technology goes well beyond creating better phishing bait.

Just as in legitimate business, discussions about AI among criminals have accelerated this year compared to 2023, researchers from cybersecurity group Intel 471 reported in a new study published Wednesday, "Cybercriminals and AI: Not Just Better Phishing."

Threat actors are watching AI developments closely, and they're experimenting, the researchers said. Some claim to use AI for tasks such as creating deepfake videos, defeating facial recognition, and summarizing data stolen in breaches. Others are building AI into their hacking tools or creating malicious chatbots.

However, the study said, “Perhaps the most observed impact AI has had on cybercrime has been an increase in scams, particularly those leveraging deepfake technology.”

Some of those scams have cost lives, the study said. For example, a group of cybercriminals known as the Yahoo Boys, primarily based in Nigeria, use deepfakes in romance and sextortion scams, gaining victims' confidence with fake personas. They often persuade those victims to share compromising photos, which they then threaten to make public unless they're paid. Intel 471 said that many of the targeted victims are minors, and that some have taken their own lives.

Deepfake offerings have increased significantly since January 2023, the study said, and they have become less expensive. One threat actor claimed to generate audio and video deepfakes using an AI tool for between US$60 and US$400 per minute, depending on complexity, a bargain compared to 2023 prices. Other bad actors’ offerings include a subscription service costing US$999 per year for 300 face swaps per day in images and videos.

Others are using AI in business email compromise (BEC) scams and document fraud. One of them, the study said, allegedly developed an AI-powered tool that manipulates invoices, intercepting communications between parties and altering details such as bank account numbers to redirect payments to the scammers.

Productivity gains

“The invoice manipulation tool allegedly has a range of functionality, including the ability to detect and edit all portable document file (PDF) documents and swap international bank account numbers (IBANs) and bank identification codes (BICs),” the study said. “The tool is offered on a subscription basis for US$5,000 per month or US$15,000 for lifetime access. If this tool works as promised, this fulfills an often-cited use case of AI for productivity gains, albeit here in a criminal context.”
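The tool itself isn't public, but the bank-detail mechanics it targets are standard. As a point of reference, here is a minimal defensive-style Python sketch of the ISO 13616 mod-97 checksum that every IBAN must satisfy; it is an illustration of the format, not a reconstruction of the criminal tool. Note the catch for defenders: a fraudster typically swaps in a real, checksum-valid account, so validation only flags crude digit tampering, and the stronger control is comparing payment details against known-good payee records.

```python
# Minimal defensive sketch: ISO 13616 mod-97 validation of an IBAN.
# A substituted IBAN is usually itself valid, so this catches only
# crude tampering; it illustrates the number format being swapped.

def iban_is_valid(iban: str) -> bool:
    """Check an IBAN's structure and mod-97 checksum (ISO 13616)."""
    s = iban.replace(" ", "").upper()
    if not (15 <= len(s) <= 34) or not s.isalnum():
        return False
    rearranged = s[4:] + s[:4]  # move country code + check digits to the end
    digits = "".join(str(int(c, 36)) for c in rearranged)  # A->10 ... Z->35
    return int(digits) % 97 == 1

print(iban_is_valid("GB82 WEST 1234 5698 7654 32"))  # True: the standard example IBAN
print(iban_is_valid("GB82 WEST 1234 5698 7654 33"))  # False: one altered digit breaks the checksum
```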

Another criminal claims to use Meta's Llama large language model (LLM) to sift data stolen in a breach for its most sensitive records, which are then used to pressure the victim into paying a ransom.
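Triaging a breach dump for sensitive records predates LLMs; pattern matching does the crude first pass, and an LLM would layer semantic ranking on top. As an illustration of that first pass only (the criminal's actual workflow is unpublished), here is a short defensive-style Python sketch that flags probable payment card numbers using a regex plus the well-known Luhn checksum:

```python
# Illustrative sketch of pattern-based sensitive-data triage:
# flag 13-16 digit runs that pass the Luhn checksum, the standard
# integrity check on payment card numbers.
import re

def luhn_ok(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return digit runs of card-number length that pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{13,16}\b", text) if luhn_ok(m)]

print(find_card_numbers("order 4111111111111111 ref 1234567890123"))
# ['4111111111111111'] -- the standard Visa test number passes; the other run fails
```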

However, noted Jeremy Kirk, analyst at Intel 471, not all claims of AI use may be accurate. “We use the word ‘purportedly’ to represent that it is a claim being made by a threat actor and that it is frequently unclear exactly to what extent AI has been incorporated into a product, what LLM model is being used, and so forth,” he said in an email. “As far as whether developers of cybercriminal tools are jumping on the bandwagon for a commercial benefit, there seem to be genuine efforts to see how AI can help in cybercriminal activity. Underground markets are competitive, and there is often more than one vendor for a particular service or product. It is to their commercial advantage to have their product work better than another, and AI might help.”

Intel 471 has observed many claims that are in doubt, including one from four University of Illinois Urbana-Champaign (UIUC) computer scientists who say they used OpenAI's GPT-4 LLM to autonomously exploit vulnerabilities in real-world systems by feeding it common vulnerabilities and exposures (CVE) advisories describing the flaws. However, the study pointed out, "Because many of the key elements of the study were not published — such as the agent code, prompts or the output of the model — it can't be accurately reproduced by other researchers, again inviting skepticism."

Automation

Other threat actors offered tools that scrape and summarize CVE data, and one seller allegedly integrated what Intel 471 called a well-known AI model into a multipurpose hacking tool that does everything from scanning networks and hunting for vulnerabilities in content management systems to coding malicious scripts.
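The scraping half of such tooling is mundane and has legitimate counterparts. As a point of reference, here is a minimal Python sketch that pulls one record from NIST's public NVD API (v2.0) and reduces it to a one-line summary; the criminal tools' internals are not public, so this illustrates only the benign plumbing an LLM summarizer would sit on top of:

```python
# Minimal sketch: fetch a CVE record from NIST's public NVD API (v2.0)
# and reduce it to a one-line summary. An LLM-based tool would feed
# descriptions like this into a model for summarization or triage.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def summarize_cve(cve_id: str) -> str:
    """Fetch one CVE from the NVD and return its English description."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    records = resp.json().get("vulnerabilities", [])
    if not records:
        return f"{cve_id}: not found"
    cve = records[0]["cve"]
    desc = next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")
    return f"{cve['id']}: {desc}"

print(summarize_cve("CVE-2021-44228"))  # Log4Shell
```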

The study's authors also highlighted new risks emerging as AI use grows, such as Google's new AI-powered Search Generative Experience generating recommendations that direct users to malicious sites, and vulnerabilities in AI applications themselves. In addition, nation-states and other malicious entities have been observed using LLMs for multiple kinds of attack. The study cited public blog posts from Microsoft and OpenAI that specifically identified five state-sponsored groups: one each from Russia, North Korea, and Iran, and two from China.

To counter this, the study noted, governments and agencies including the US Federal Communications Commission, the US Department of Homeland Security, and the UK government are initiating measures to monitor and regulate AI to ensure its safety and security.

It will only get worse

Intel 471 concluded that, although AI had only played what it called “a small supporting role” in cybercrime in the past, the technology’s role has grown. Its analysts expect that deepfakes, phishing, and BEC activity will increase, along with disinformation campaigns fueled by LLMs’ ability to generate content.

And, the company added, “The security landscape will dramatically change when an LLM can find a vulnerability, write and test the exploit code and then autonomously exploit vulnerabilities in the wild.”

“Machine learning and technology dubbed as AI have been circulating in the security industry for a long time, from fighting spam to detecting malware,” Kirk said. “At a minimum, AI could aid in faster attacks but also in faster defenses. There will be times when attackers get the upper hand and defenders are catching up, but that is not unlike where we are now.

“How cybercriminals can use AI will also depend on the availability of LLMs and AI models that have fewer guardrails and allow prompts for information or code that could help in malicious uses.”