The sheer volume of new and unknown threats coming our way — as well as the lack of fully formed risk frameworks for AI — means that red-team continuous monitoring is not only essential but perhaps your only path to security. Credit: PeopleImages.com - Yuri A / Shutterstock AI models present CISOs with evolutionary threats that that we may never fully understand. Their very dynamic nature — continually ingesting new data to develop new capabilities — suggests that the unique threats they are subject to will require red-team testing on an ongoing basis. Especially as the industry awaits the kind of frameworks and guidelines that may help make AI models more intrinsically safe. A key part of the issue for most enterprises is retrieval augmented generation (RAG), an inventive approach that connects AI models with information from business-specific sources to provide more specific context. Supplementing models with data from staff records or customer accounts, for example, is a brilliant way to increase the value that AI models can bring. But the sensitive nature of this data means that RAG can rapidly increase the probability of a data loss event. Unfortunately, the easy answer of just turning the attention of your existing Red Team toward this new vector won’t work. The traditional human approach will fall short because there is a shortage of skilled experts in this field that understand prompt engineering. As noted by Tony Mao, founder of Nullify: “Red-teaming LLMs is often an arduous and time-consuming venture, as many human experts are required to craft malicious prompts to probe LLMs to produce harmful responses. At times, it requires dozens or even hundreds of these experts to write prompts, evaluate the responses, and iteratively develop new prompts.” We’re still at an early stage, and there is much work to do in developing the approaches and tools necessary to perform continuous red-team monitoring of AI models. But it is likely that your AI model Red Team will be another Bot with a co-pilot (in this case, human) to oversee this activity. New, emerging, and unknown threats Cyber teams have not traditionally spent much time with AI nor with AI teams. Their hands have been full dealing with existing threats and cyber risks. In risk management, we call this a “dark corner,” where little light has been shined. Business attraction to (and FOMO about) generative AI is at the highest levels seen for any technology in the past few decades. Because of this, CISOs can’t afford to ignore the threats. A good place to start educating yourself is by reviewing the OWASP Top 10 for AI Machine Learning, currently in draft. (OWASP has also released critical vulnerabilities for LLMs.) For those used to reading OWASP, this is relatively easy to digest. But there are 1000s of AI LLM models; and with this number only increasing, the task is only getting harder. What makes this task more difficult is that the full inventory of AI models in use at any given organization will not likely be managed by IT, let alone the cybersecurity team. This will need to be addressed first to manage this risk effectively. Many of these AI models are indeed “shadow IT” that have been business sourced, developed, and owned. Moreover, employees have taken it upon themselves to experiment with unsanctioned AI — and are feeding sensitive company data into those models. For IT and cybersecurity to now begin paying attention and expose any negative impact on what these models are delivering for various business units can be a highly political action. The Cloud Security Alliance (CSA) also has a new AI Controls working group that has been evaluating AI models and identifying the new threats that come with this environment. This report has just been issued named CSA Large Language Model (LLM) Threats Taxonomy. I personally have been on that working group and would note that there are more than 440 AI model threats thus far identified. Yes, you read that correctly: 440 threats. For cyber staff, this is a new domain of threat hunting almost beyond comprehension that they must add to their to-do list. But first we should ensure that our cyber staff get some training to understand AI models and how they operate. We always want our team to be able “to think like an attacker,” and this is likely to be a new set of brain path waves to connect. Trying to regulate a moving target Global regulatory bodies are already observing these trends and are reacting with new regulations and AI guidelines that have come “thick and fast” onto the scene. We have seen reputable independent bodies such as NISTlaunch its AI Risk Management Frameworkand CISA its Roadmap for AI. Also there have been various governments that have established new guidelines, such as EU AI EthicsGuidelines. The Five Eyes (FVEY) alliance comprising Australia, Canada, New Zealand, the United Kingdom, and the United States have also weighed in and developed Secure AI guidelines, recommendations that are a stretch for most organizations to address but speak volumes of the joint concern that these nations have for this new AI threat. How enterprises can cope To make matters worse, the shortage of cyber talent and an overloaded roadmap aren’t helping. This new world requires new skills missing in most IT shops. Just consider how many staff in IT understand AI models – the answer is not many. Then extend this question to who understands Cybersecurity and AI Models? I already know the answer and it is not pretty. Until enterprises get up to speed there, current best practice include establishing a generative AI standard that includes guidance on how to use AI, and what risks need to be considered. Within large enterprises the focus has been on segmenting generative AI use cases into low risk and medium/high risk. Low-risk cases can proceed with haste. On the other hand, more robust business cases are required for medium- and high-risk examples to ensure the new risks are understood and part of the decision process. As we know, policy and guardrails can help guide your enterprise away from dangerous risks. But there are few examples to date, and risk frameworks usually take one to two years to adopt and be in effect. Until then, most enterprises will have to accept some degree of risk to use AI models. We will see that Red Teams and the tools to enable this will continue to evolve and become available and this will help. But we have to accept that this is a new and complex world, and that there will be severe AI model incidents of data leakage and loss. As these events occur, we should expect that the regulatory scrutiny will only increase and that the CIO and CISO will have added pressure to manage this new risk. More on AII security: Microsoft warns of ‘Skeleton Key’ jailbreak affecting many generative AI models Criminals, too, see productivity gains from AI AI poisoning is a growing threat — is your security regime ready? SUBSCRIBE TO OUR NEWSLETTER From our editors straight to your inbox Get started by entering your email address below. Please enter a valid email address Subscribe