Top 5 Security and AI Reads - Week #14
MCP tool poisoning attacks, AI cyberattack evaluation frameworks, LLM-managed adaptive honeypots, generative AI evaluation science, and energy-latency attacks in deep learning
Welcome to the fourteenth instalment of the Stats and Bytes Top 5 Security and AI Reads weekly newsletter. We're kicking off with an exploration of tool poisoning attacks against Model Context Protocol (MCP) services, revealing how malicious actors can exploit the gap between what agents and humans see in tool descriptions. Next, we examine a practical framework from Google DeepMind for evaluating emerging AI cyberattack capabilities, which adapts established security models to assess how LLMs might drastically reduce the cost of bottleneck attack stages. We then jump into "Hypnotic Honey", a neat approach that uses locally hosted language models to create adaptive honeypots generating believable command-line responses to potential attackers. Following that, we consider the evolving science of generative AI evaluation, highlighting efforts to move beyond simple metrics toward more comprehensive assessment methodologies crucial for security applications. We wrap up with "energy-latency…