Generative AI | News, how-tos, features, reviews, and videos
Prompt injection, prompt extraction, new phishing schemes, and poisoned models are the most likely risks organizations face when using large language models.
Global regulatory efforts focused on generative AI have taken a wide range of approaches, but more guidance is needed on permissible uses of the technology.
Businesses are finding increasingly compelling reasons to use generative AI, making the development of security-focused generative AI policies more critical than ever.
Patched in the latest version of MLflow, the flaw allows attackers to steal or poison sensitive training data when a developer simply visits a malicious website.
This year's annual national defense funding bill is chock-full of cybersecurity-related provisions with spending focused on nuclear weapons and systems security, artificial intelligence, digital diplomacy, and much more.
The next few years will see AI tip the scales back and forth between threat actors and security teams protecting the enterprise. Collaboration with government is key to the tech industry coming out ahead.
Almost four-fifths of the surveyed organizations had already adopted AI in production, with only a few still testing the technology.
Critical infrastructure and other high-risk organizations will need to conduct AI risk assessments and adhere to cybersecurity standards.
The three defining concerns associated with the security of AI are trust in AI, ethical application of AI, and cybersecurity of AI, according to SIA research on cybersecurity megatrends for 2024.
You can try to keep the flood of generative AI at bay, but embracing it with proper vigilance is likely the best way to maintain control and keep it from becoming shadow AI.