Artificial Intelligence | News, how-tos, features, reviews, and videos
MITRE’s ATLAS threat landscape knowledge base for artificial intelligence is a comprehensive guide to the tactics and techniques bad actors use to compromise and exploit AI systems.
The true determinant of success will be how well each side harnesses this powerful tool to outmaneuver the other in the ongoing cybersecurity arms race.
Legal documents, HR data, source code, and other sensitive corporate information are being fed into unlicensed, publicly available AIs at a swift rate, leaving IT leaders with a mounting shadow AI mess.
OWASP’s checklist provides a concise and quick resource to help organizations and security leaders deal with generative AI and LLMs.
Generative AI could be the holy grail of DevSecOps, from writing secure code and documentation to creating tests. But it could be a major point of failure if not used correctly.
By automating repetitive triage and documentation tasks, generative AI systems allow entry-level security analysts to spend more time on investigations, response, and developing core skills.
Generative AI can create fake documents and personal histories that fool common know-your-customer authentication practices.
Your existing cloud security practices, platforms, and tools will only go so far in protecting the organization from threats inherent in the use of large language models.
Risks associated with artificial intelligence have grown with the use of generative AI, and companies must first understand their risks to create the best protection plan.
Prompt injection, prompt extraction, new phishing schemes, and poisoned models are the most likely risks organizations face when using large language models.