Many organizations are now exploring the use of genAI for cybersecurity, but what are some things to consider before taking the plunge?

At the start of this year, I shared my view that 2024 would be when security practitioners bridge the cyber divide: the use of artificial intelligence (AI) for cybersecurity would be on the rise, and technology providers would increasingly integrate generative AI (genAI) into their cybersecurity products and services, as well as leverage AI in adversarial simulations and in countermeasures against deepfakes and other attacks.

Fast-forward to the last quarter of the year, and many cybersecurity departments have now jumped on the bandwagon to explore the use of genAI for cybersecurity. Predominantly, the goal is to expedite the detection, containment, and eradication phases of incident management, as well as to improve risk assessment. Detection receives particular focus, as genAI can be leveraged to help determine the root cause of an incident and thereby facilitate containment and eradication.

Unfortunately, some vendors, in a bid to gain first-mover advantage, try to ride the genAI wave without first putting the appropriate guard rails in place. Time to market is sped up, and potential product issues are dealt with by kicking the can down the road: instead of being fixed before launch, improvements are shifted into the product roadmap.

Meanwhile, I believe users are progressing from the Gartner Hype Cycle's initial Peak of Inflated Expectations into the Trough of Disillusionment with genAI solutions, before they find the sweet spot that aligns expectations with reality.

Having tested and evaluated some genAI solutions for cybersecurity, I have observed three key points that users should consider. These may be useful for security departments at the start of their genAI journey.

1. Usage confidence

Usage confidence is about the reliability of an output from a prompt or prompt book. Because of the risk of hallucinations, vendors often include caveats stating that users must always verify the output. When approached, some vendors were unable to confidently stand behind specific prompts or prompt books, yet they claim that genAI can help organizations achieve "machine speed." Hence, when evaluating genAI solutions for cybersecurity, it is important to ascertain which outputs can be confidently relied upon and which ones require verification.

Ultimately, under an assumed-breach approach, incident management is about the ability to respond fast. If the output is unreliable, false positives and false negatives will introduce delays and divert resources from true positives. Security teams also depend on the accuracy and completeness of generated summaries. It is therefore essential to gain clarity from vendors on how accurate and reliable their genAI solutions are.

2. Usage friction

Writing good prompts has become more of an art than a science. Obtaining the desired output sometimes requires multiple adjustments and iterations. In addition, some genAI solutions struggle with ad-hoc and open-ended security queries, which affects a user's ability to obtain quick and accurate results and negates the desired capability of solving a problem at "machine speed." In some cases, genAI solutions have not yet integrated a comprehensive set of log sources. As a result, the completeness and accuracy of the output is called into question, discouraging its use.

Usage friction is exacerbated when genAI prompts are charged under a utility pricing model: security staff become hesitant to use prompts if they are held accountable for each use. It is therefore important to recognize the potential triggers of usage friction and address them so that genAI solutions can be adopted effectively.

3. Usage governance

Lastly, some vendors may charge users based on activation of the genAI feature. Like a running tap, charges can mount if you forget to turn the feature off, regardless of whether you actually use it. Governance structures are therefore required to avoid wastage. At the very least, role-based access controls and proper accounting need to be implemented as guard rails. Whatever the governance guard rails may be, a utility pricing model should eventually allow for the elasticity we see in cloud computing. Users therefore need to ascertain the maturity of a governance structure to avoid misuse and wastage.
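To make those guard rails concrete, here is a minimal sketch in Python of what they might look like in practice: a gateway that enforces role-based access and records every prompt invocation for cost accounting. The class, role names, and per-prompt cost below are hypothetical placeholders, not any particular vendor's API.

```python
import datetime

# Roles permitted to run genAI prompts; everyone else is denied.
# (Hypothetical role names for illustration only.)
AUTHORIZED_ROLES = {"soc_analyst", "incident_responder"}

# Assumed flat cost per prompt under a utility charging model.
COST_PER_PROMPT_USD = 0.50

class GenAIGateway:
    """Hypothetical guard-rail layer in front of a vendor's genAI feature."""

    def __init__(self):
        # Accounting trail: who ran what, when, and at what cost.
        self.usage_log = []

    def run_prompt(self, user: str, role: str, prompt: str) -> str:
        # Guard rail 1: role-based access control.
        if role not in AUTHORIZED_ROLES:
            raise PermissionError(
                f"{user} ({role}) is not authorized to run genAI prompts"
            )

        # Guard rail 2: record every invocation for chargeback/showback.
        self.usage_log.append({
            "user": user,
            "role": role,
            "prompt": prompt,
            "cost_usd": COST_PER_PROMPT_USD,
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc
            ).isoformat(),
        })

        # Placeholder for the actual vendor call.
        return f"[genAI response to: {prompt}]"

    def total_spend(self) -> float:
        # Roll up charges so the feature is not left running like an open tap.
        return sum(entry["cost_usd"] for entry in self.usage_log)
```

In practice, a call such as gateway.run_prompt("alice", "soc_analyst", "Summarize incident 1234") would be both authorized and metered, and the usage log would feed a chargeback or showback report, giving the governance structure the visibility it needs to catch idle activation charges and misuse early.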
In a nutshell, technology buyers should consider these three points when faced with an onslaught of genAI solutions. Eventually, as with cloud computing, the adoption of genAI in cybersecurity will mature: we will exit the Trough of Disillusionment and progress onto the Slope of Enlightenment, before reaching the sweet spot on the Gartner Hype Cycle's Plateau of Productivity.