Generative AI | News, how-tos, features, reviews, and videos
AI frameworks, including Meta’s Llama, are prone to automatic Python deserialization by pickle that could lead to remote code execution.
Microsoft’s ethical AI hackers provide some answers — as well as more questions.
Executives are aggressively pressing for all manner of genAI deployments and experimentation despite knowing the risks — and CISOs are left holding the risk management bag.
Researchers at Google DeepMind and Stanford University have created highly effective AI replicas of more than 1,000 people based on simple interviews.
Large language models (LLMs) are proving to be valuable tools for discovering zero-days, bypassing detection, and writing exploit code — thereby lowering the barrier to entry for pen-testers and attackers alike.
CrowdStrike, Change Healthcare, rising ransomware threats and cyber regulations — here’s what dominated the headlines this year and how CISOs and cyber pros are adapting.
As companies scramble for tougher shields against genAI risks, homomorphic encryption steps into the spotlight, bringing a unique superpower: it can crunch encrypted data without ever cracking it open.
Generative AI is showing growing utility for augmenting security ops, but studies suggest caution is still warranted, as cyber pros raise concerns about rapid adoption.
Prompt injection and supply chain vulnerabilities remain the main LLM vulnerabilities but as the technology evolves new risks come to light including system prompt leakage and misinformation.
Microsoft is allocating $4 million to a new bug bounty program, Zero Day Quest, among other measures to enhance software security announced at its annual Ignite event.