Stats and Bytes

Stats and Bytes

Share this post

Stats and Bytes
Stats and Bytes
🎩 Top 5 Security and AI Reads - Week #14
Copy link
Facebook
Email
Notes
More

🎩 Top 5 Security and AI Reads - Week #14

MCP tool poisoning attacks, AI cyberattack evaluation frameworks, LLM-managed adaptive honeypots, generative AI evaluation science, and energy-latency attacks in deep learning

Apr 06, 2025
βˆ™ Paid

Share this post

Stats and Bytes
Stats and Bytes
🎩 Top 5 Security and AI Reads - Week #14
Copy link
Facebook
Email
Notes
More
1
Share

Welcome to the fourteenth instalment of the Stats and Bytes Top 5 Security and AI Reads weekly newsletter. We're kicking off with an exploration of tool poisoning attacks against Model Context Protocol (MCP) services, revealing how malicious actors can exploit the gap between what agents and humans can see in tool descriptions. Next, we examine a practical framework for evaluating emerging AI cyberattack capabilities out of Google DeepMind, which adapts established security models to assess how LLMs might drastically reduce costs for bottleneck attack stages. We then jump into "Hypnotic Honey", a cool approach using locally hosted language models to create adaptive honeypots that generate believable command line responses to potential attackers. Following that, we consider the evolving science of generative AI evaluation, highlighting efforts to move beyond simple metrics toward more comprehensive assessment methodologies crucial for security applications. We wrap up with "energy-late…

Keep reading with a 7-day free trial

Subscribe to Stats and Bytes to keep reading this post and get 7 days of free access to the full post archives.

Already a paid subscriber? Sign in
Β© 2025 Josh Collyer
Privacy βˆ™ Terms βˆ™ Collection notice
Start writingGet the app
Substack is the home for great culture

Share

Copy link
Facebook
Email
Notes
More