AI-SPM buyer’s guide: 9 security posture management tools to protect your AI infrastructure

Feature
17 Sep 2024 | 10 mins

Cybersecurity vendors have started to create or add features to protect enterprises' AI infrastructure. We discuss some of those already on the market or planning to release their full products in 2024.


Widespread adoption of generative AI across businesses has increased the need for safeguards, including AI security software. It is a tall order because AI's reach into an organization's infrastructure and data is enormous, meaning that a broad spectrum of protective measures is required. That same broad attack surface is one of the reasons attackers are drawn to abusing AI.

We examined nine vendors’ tools that handle AI security posture management (AI-SPM). This is an emerging field and unfortunately that means most products are nowhere near as comprehensive or as integrated as they could be. There are at least nine other vendors who are actively working on similar products that will see general release within the next few months.

[ Download our editors’ PDF AI security posture management (AI-SPM) enterprise buyer’s guide today! ]

AI security posture management explained

AI security posture management is an emerging cybersecurity discipline focused on ensuring the integrity and security of AI and machine learning systems. AI-SPM encompasses strategies, tools, and techniques for monitoring, assessing, and enhancing the security of AI models, data, pipelines, applications, and services, even as threats to those entities continually evolve.

In the past, security posture management tools were designed for two different situations: to protect general cloud operations against misconfigurations and abuse, which is the province of cloud security posture management tools; and to protect against data leakage or malware infections, which is the province of data security posture management tools. With the rise of AI and large language models (LLMs), there needs to be a third product category that checks the instances of managed AI cloud services and their SDKs (such as Hugging Face's Transformers or the Azure OpenAI SDK) and prevents model abuses.

The early reports about insecure AI usage are grim. A study by Kong found that a majority of those surveyed have found ways around their organization's restrictions on AI usage, and a quarter have no guidelines whatsoever. MITRE has developed a comprehensive database of adversary tactics called the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS), based on real-world attack observations. MIT researchers have assembled a database of more than 700 AI-related risks observed from various AI sources. Another great source of AI-related attack methods is the Open Worldwide Application Security Project (OWASP), which last year released its latest Top 10 for Large Language Model Applications, a list of the most critical LLM exploits. These are all worthy efforts, and security managers should examine them before choosing any AI-SPM product.

For example, the training data used in an AI model can be the target of an attack or injected with bad data to manipulate the model's results. Researchers from JFrog demonstrated this in February when they found more than 100 models that could execute code on a victim's machine to create malicious backdoors.

Why enterprises need AI-SPM

AI-SPMs have been designed to protect enterprise networks and applications from these and other threats. Just as no modern business would assemble a network without an appropriate firewall, AI-SPMs "ensure that AI models stay explainable, fair, accountable, transparent and equitable," Forrester analyst Andras Cser tells CSO. "Further good security hygiene dictates that AI infrastructure should not be allowed to be used as a steppingstone for hackers for lateral movement, and data exfiltration and should include policies to prevent and fix configuration drift."
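Drift detection of the kind Cser describes boils down to comparing each resource's observed settings against a declared baseline and flagging the differences. A minimal, vendor-neutral sketch (the setting names and baseline values here are invented for illustration):

```python
def detect_drift(baseline: dict, observed: dict) -> dict:
    """Return settings whose observed value differs from the baseline,
    including baseline settings missing from the observed config."""
    drift = {}
    for key, expected in baseline.items():
        actual = observed.get(key, "<missing>")
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

# Hypothetical secure baseline for an AI inference endpoint
baseline = {
    "public_network_access": False,
    "encryption_at_rest": True,
    "logging_enabled": True,
}
observed = {
    "public_network_access": True,  # drifted: endpoint exposed
    "encryption_at_rest": True,
    # logging_enabled was never set
}
print(detect_drift(baseline, observed))
```

A real AI-SPM would pull the observed values from cloud provider APIs on a schedule and attach remediation playbooks to each drifted setting.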

All the AI-SPMs make use of agentless configurations, accessing cloud-based models and leaving data on their existing platforms. This is both a security measure and a way to avoid moving the massive data repositories involved.

AI-SPM vendors also make use of AI-related mechanisms to classify these vast data collections and to keep track of and protect data against potential abuse and attack. Some have integrated their AI-SPMs with their existing cloud or data SPMs, with rules, compliance checking, best practices, and protection policies that bridge all three types of security postures.

Some of the vendors offer more comprehensive solutions that include a variety of AI-related security measures: protecting AI pipelines and workloads, identifying sensitive data that is referenced by an AI model, examining training data for any alteration by a third party or external application, and flagging ways that shared AI services and platforms could be compromised.

Some vendors just do a top-level inspection of one or two services from each of the big three cloud platforms' AI services (Amazon, for example, has more than a dozen AI-related service offerings), while others (such as Securiti and Protect AI) take a deeper dive and do a more comprehensive examination of AI data from the AI vendors themselves and other model sources.

Finally, some vendors have been active in the open source world, such as Protect AI, which has open source versions of three of its four commercial tools, and Orca’s GOAT, a free learning platform that is based on the OWASP top 10 risks.

Leading AI-SPM vendors and products

We asked nine security vendors to demonstrate their AI-related tools. Vendors approached AI-SPM from different directions. Protect AI built its AI-SPM from scratch as a superset of features found in both data and cloud SPMs. Some, such as Palo Alto Networks, Wiz, Securiti, and Orca, are taking leadership positions by integrating and extending their existing posture platforms with comprehensive AI tools. Others, such as Microsoft, Cyera.io, and Varonis, are less complete, also extending their data or cloud SPMs. Legit Security extended its general application security tool with AI-SPM. Here are more details about each one.

Cyera.io specializes in file-level data classification. Its DSPM product has added what you might think of as AI-enriched data link protection as part of the default product's features. It also offers a specialized module for Microsoft Copilot data scanning that can detect, for example, data used by insiders.

Legit Security began life protecting application workloads and has extended its platform into AI posture management. It examines AI models, code repositories, cryptographic secrets, and other AI-related instances and produces risk scores to focus mitigation efforts. For example, you can use its tool to track which users are employing GitHub's Copilot services or using poorly built or insecure AI models. A dozen pre-built policies for tracking AI posture come pre-installed, and creating new ones is made easier with an interactive module similar to how firewall rules are built in other security products.
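Risk scores like these are typically built by weighting findings by severity and capping the result on a fixed scale. The weights and cap below are invented for illustration and are not Legit Security's actual scheme:

```python
# Hypothetical severity weights; real products tune these per finding type.
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def risk_score(findings: list[str], cap: int = 100) -> int:
    """Aggregate severity-weighted findings into a 0-100 risk score."""
    raw = sum(SEVERITY_WEIGHTS.get(severity, 0) for severity in findings)
    return min(raw, cap)

# One critical, one high, and one low finding for a hypothetical model repo
print(risk_score(["critical", "high", "low"]))  # 16
```

Capping the score keeps a single asset with hundreds of low-severity findings from drowning out an asset with one critical exposure in triage queues.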

Microsoft Defender Cloud Security Posture Management includes AI capabilities as part of its public preview, with a finished product version expected at the end of 2024. AI features are integrated into its cloud SPM offering. Customers will need to set least-privilege access to scan AWS' Bedrock service. It will scan Azure OpenAI and machine learning services as well, which means fewer services than the other vendors in this roundup (Google Cloud support is also expected at year end). It plans to scan private Azure endpoints for misconfigurations and vulnerabilities.

Orca Security has a single multi-purpose security platform that offers both CNAPP and DSPM protection, which makes it quite comprehensive with its added AI features. It can scan more than 50 different AI model sources and collect an asset inventory from common AI tools such as PyTorch and TensorFlow to create a software bill of materials. It comes with dozens of best-practice security rules that initially focused on compliance. It also has alerts for when sensitive data is detected inside models, including inside training data repositories, and when secrets are being exposed.
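Secrets detection in model artifacts and training data, which Orca and others alert on, commonly starts with pattern matching against well-known credential formats. A minimal regex-based sketch (only two widely documented key formats are covered, and the sample string is fabricated, using AWS's published example key):

```python
import re

# Two widely documented credential formats; real scanners ship hundreds
# of patterns plus entropy checks to catch generic secrets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

sample = "config: aws_key=AKIAIOSFODNN7EXAMPLE rest of the training record"
print(find_secrets(sample))
```

In an AI-SPM, checks like this run over training corpora, notebook outputs, and model card files, since credentials baked into training data can later be regurgitated by the model itself.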

Palo Alto Networks last year acquired Dig Security, which is now fully incorporated and rebranded as Prisma Cloud AI-SPM. It supports top-level scans of Amazon, Google Cloud, and Azure AI services to discover AI content, can classify and examine model data and secrets, and comes with many built-in AI-related policies.

Protect AI has built a comprehensive AI posture tool from scratch. It is composed of a series of separately priced tools that do specific tasks. There is also a series of open source products that are freely available to try out, and you will want to examine these because the tools available for purchase all begin in the $200,000 range for unlimited seats and scans. Guardian is its scanning tool that examines model vulnerabilities and sets up security policies. Radar offers end-to-end visibility and governance features. Recon, acquired from SydeLabs, does automated red teaming and vulnerability detection, specifically targeting LLM endpoints. Finally, Sightline provides an AI-related vulnerability feed.

Securiti has extended its DSPM product with AI protective features, including a series of pre-built integrated rules and policies that cover both data and AI circumstances, auto discovery of AI models and mapping of data flows. It also includes a deep view of various AWS services.

Varonis has a single multi-purpose security platform that comes from a strong DSPM background and has added AI posture features to help development teams classify data used in the AI ecosystem, such as scanning for bad AI behavior, improper use of identities, and risky data flows. Automated remediation processes are built into the tool as well. It is not as complete a solution as some of the others in that it doesn't scan deeply into all of the AWS or Azure AI-related services and doesn't scan proprietary AI data stores. There is an extra-cost module to scan Microsoft Copilot, with plans to add other modules for Salesforce Einstein and Google's Gemini in the near future.

Wiz has a single multi-purpose security platform that comes from a strong posture management (cloud and data) background. Its advanced version has been augmented with a comprehensive series of AI-related policies, detection algorithms, and pipeline, model, and data scanners, assembled into a separate AI dashboard page. It can also detect AI pipeline abuses, map dependencies graphically, and suggest remediation steps.

What about AI-SPM pricing?

Pricing and packaging of AI-SPM varies widely. Many vendors offer free trials limited to a month (an option also available on the AWS Marketplace). We pointed out open-source alternatives in the vendor profiles above; these are also a good way to see how the products work.

At the low end is Legit Security's pricing, which reflects its appsec heritage: $50/mo/developer instance, with quantity discounts available. Most of the other vendors sell their SPMs on six-figure annual contracts. As an example, Protect AI sells each of its four modules separately, starting at $225,000/year, with discounts on bundles.

Most of the vendors didn't want to provide pricing directly but have published pricing on the AWS Marketplace. Securiti prices its product as a service offering at $3 per hour per running instance. Microsoft has a similar per-resource pricing plan for Defender. Palo Alto Networks Prisma Cloud/Enterprise has a scheme involving a complex purchase of "credits" that starts at $18,000 per year.

Varonis has two pricing components: a price per user per protected application and an additional price for resource consumption. For a typical situation with 1,000 users, the total could be in the low six-figure range annually.

The other AI-SPM vendors have fixed annual contracts as follows, with links to their Marketplace descriptions: Cyera, Orca Security ($84,000–$360,000, depending on workload size), and Wiz Advanced ($38,000).
