Lucian Constantin
CSO Senior Writer

LLMjacking: How attackers use stolen AWS credentials to enable LLMs and rack up costs for victims

News Analysis
20 Sep 2024, 5 mins

Users of AI cloud services such as Amazon Bedrock are increasingly being targeted by attackers who abuse stolen credentials in a new attack dubbed LLMjacking.

Credit: Shutterstock

The black market for access to large language models (LLMs) is growing, with attackers increasingly abusing stolen cloud credentials to query AI runtime services such as Amazon Bedrock in a practice known as LLMjacking, according to research from security firm Sysdig.

Observed API queries suggest that threat actors not only query LLMs that account owners have already deployed on such platforms but also attempt to enable new ones, which could quickly ramp up costs for victims.

“LLMjacking itself is on the rise, with a 10x increase in LLM requests during the month of July and 2x the amount of unique IP addresses engaging in these attacks over the first half of 2024,” Sysdig researchers said in a report. “With the continued progress of LLM development, the first potential cost to victims is monetary, increasing nearly threefold to over $100,000/day when using cutting edge models like Claude 3 Opus.”

Attackers want access to LLMs without paying and without the usage limits that free services impose, for purposes ranging from role-playing to script generation, image analysis, and other text prompts. Sysdig has found evidence that in at least some cases the users engaging in LLMjacking are based in Russia, where Western sanctions have severely restricted access to LLM chatbots and services provided by Western companies.

“The main language used in the prompts is English (80%) and the second most-used language is Korean (10%), with the rest being Russian, Romanian, German, Spanish, and Japanese,” the researchers said.

Attackers are abusing Bedrock APIs

Amazon Bedrock is an AWS service that allows organizations to easily deploy and use LLMs from multiple AI companies, augment them with their own datasets and build agents and applications around them. The service supports a long list of API actions through which models can be managed and interacted with programmatically.

The most common API actions called by attackers via compromised credentials earlier this year were InvokeModel, InvokeModelStream, Converse, and ConverseStream. More recently, however, attackers have also been observed calling PutFoundationModelEntitlement and PutUseCaseForModelAccess, which are used to enable models, as well as ListFoundationModels and GetFoundationModelAvailability beforehand to determine which models an account already has access to.
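For defenders, these API action names double as hunting material. The following is a minimal sketch, not Sysdig's tooling, that uses boto3 to search recent CloudTrail events for the model-enablement and discovery calls named above; the region, time window, and event list are assumptions made for the example.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Event names drawn from the API actions described above; adjust to taste.
SUSPICIOUS_EVENTS = [
    "PutFoundationModelEntitlement",
    "PutUseCaseForModelAccess",
    "ListFoundationModels",
    "GetFoundationModelAvailability",
]

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # region is an assumption
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)  # illustrative 7-day lookback window

for event_name in SUSPICIOUS_EVENTS:
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
        EndTime=end,
    )
    for event in resp.get("Events", []):
        # Surface who called the API and when, for further investigation.
        print(event_name, event.get("Username"), event.get("EventTime"))
```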

This means that organizations that have deployed Bedrock but have not activated certain models are not safe. The difference in cost between models can be substantial: the researchers calculated a potential cost of over $46,000 per day for Claude 2.x usage, but for models such as Claude 3 Opus the cost could be two to three times higher.
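The per-day figures come down to simple arithmetic: sustained token throughput multiplied by per-token pricing. The snippet below is a back-of-the-envelope illustration, not Sysdig's methodology; the throughput and price values are assumptions chosen only to show how quickly abuse of an expensive model adds up.

```python
def daily_cost(input_tok_per_min: float, output_tok_per_min: float,
               input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Estimated daily spend for sustained usage at a given token throughput."""
    per_minute = ((input_tok_per_min / 1000) * input_price_per_1k
                  + (output_tok_per_min / 1000) * output_price_per_1k)
    return per_minute * 60 * 24

# Assumed, illustrative numbers only: a premium model priced at $0.015/1K input and
# $0.075/1K output tokens, abused at a sustained 500K input + 500K output tokens per minute.
print(f"${daily_cost(500_000, 500_000, 0.015, 0.075):,.0f} per day")  # -> $64,800 per day
```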

The researchers have seen attackers using Claude 3 to generate and improve the code of a script designed to query the model in the first place. The script is designed to continuously interact with the model, generating responses, monitoring for specific content, and saving the results in text files.
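Sysdig did not publish the attackers' script, but the behavior described, repeated invocations, keyword matching, and writing results to disk, can be pictured with a minimal sketch like the one below. The model ID, prompt, keyword, and output file are placeholders, and the request body follows the Anthropic messages format that Bedrock expects for Claude models.

```python
import json
import time
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # placeholder model ID

while True:
    resp = runtime.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": "placeholder prompt"}],
        }),
    )
    text = json.loads(resp["body"].read())["content"][0]["text"]
    if "keyword of interest" in text:         # monitor responses for specific content
        with open("results.txt", "a") as fh:  # save matching output to a text file
            fh.write(text + "\n")
    time.sleep(5)  # continuously re-query the model
```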

“Models being disabled in Bedrock and the requirement for activation should not be considered a security measure,” the researchers warned. “Attackers can and will enable them on your behalf in order to achieve their goals.”

One example is the Converse API, which was announced in May and provides a simplified way for users to interact with Amazon Bedrock models. According to Sysdig, attackers started abusing the API within 30 days of its release. Converse API actions don’t automatically appear in CloudTrail logs, whereas InvokeModel actions do.
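For comparison, a Converse call takes only a few lines. The sketch below is a generic usage example assuming boto3's bedrock-runtime client and a placeholder Claude model ID, not code taken from the observed attacks.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption
resp = runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "placeholder prompt"}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```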

Securing LLM credentials and tokens can mitigate LLMjacking

Even if logging is turned on, savvy attackers will try to disable it by calling DeleteModelInvocationLoggingConfiguration, which turns off invocation logging to CloudWatch and S3. In other cases, they check the logging status first and avoid using stolen credentials where logging is enabled, in order to conceal their activity.
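Defenders can watch for exactly this behavior. The sketch below is an assumed approach rather than a Sysdig recommendation: it checks whether Bedrock model invocation logging is still configured and searches CloudTrail for recent calls to DeleteModelInvocationLoggingConfiguration.

```python
import boto3
from datetime import datetime, timedelta, timezone

bedrock = boto3.client("bedrock", region_name="us-east-1")  # region is an assumption

# Alert if invocation logging has been removed or was never configured.
config = bedrock.get_model_invocation_logging_configuration().get("loggingConfig")
if not config:
    print("WARNING: Bedrock model invocation logging is not configured")

# Look for anyone tampering with the logging configuration over the last 7 days.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
resp = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "DeleteModelInvocationLoggingConfiguration",
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
)
for event in resp.get("Events", []):
    print("Logging config deleted by", event.get("Username"), "at", event.get("EventTime"))
```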

Attackers often don’t call Amazon Bedrock models directly but go through third-party services and tools. One example is SillyTavern, a frontend application for interacting with LLMs that requires users to supply their own credentials for an LLM service of their choice or for a proxy service.

“Since this can be expensive, an entire market and ecosystem has developed around access to LLMs,” the researchers warn. “Credentials are sourced in many ways, including being paid for, free trials, and ones that are stolen. Since this access is a valuable commodity, reverse proxy servers are used to keep the credentials safe and controlled.”

Organizations should take steps to ensure their AWS credentials and tokens are not leaked in code repositories, configuration files, and other places. They should also follow the principle of least privilege, limiting tokens to the task for which they were created.
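One way to apply that principle, sketched below as an assumption rather than a prescribed control, is an explicit-deny IAM policy attached to credentials that only need to invoke an already-deployed model, blocking the model-enablement and log-tampering actions described earlier. The policy is expressed as a Python dict so it can be attached programmatically; the action names follow the Bedrock API actions named in this article, and the user and policy names are placeholders.

```python
import json
import boto3

# Explicit deny for actions an invoke-only credential should never need.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "bedrock:PutFoundationModelEntitlement",
            "bedrock:PutUseCaseForModelAccess",
            "bedrock:DeleteModelInvocationLoggingConfiguration",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(                      # user name and policy name are placeholders
    UserName="bedrock-invoke-only-user",
    PolicyName="DenyBedrockModelEnablement",
    PolicyDocument=json.dumps(deny_policy),
)
```

Because an explicit deny overrides any allow attached elsewhere, it acts as a guardrail even if the credential also carries broader permissions.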

“Continuously evaluate your cloud against best practice posture controls, such as the AWS Foundational Security Best Practices standard,” the Sysdig researchers said. “Monitor your cloud for potentially compromised credentials, unusual activity, unexpected LLM usage, and indicators of active AI threats.”