Stats and Bytes

🎩 Top 5 Security and AI Reads - Week #6

AGI is not the north star, LLM cyber evals missing the point, code training data vulnerabilities, GNN defence certification, and LLM package hallucinations.

Josh Collyer ∙ Feb 09, 2025


Welcome to the sixth installment of the Stats and Bytes Top 5 Security and AI Reads weekly newsletter. We're kicking off with a thought-provoking paper challenging the AI community's fixation on AGI as a research goal. Next, we'll examine critical findings on how current LLM security evaluations may be missing real-world attack scenarios. We'll then dive into an analysis of vulnerabilities and licensing issues lurking in LLM training datasets, followed by a novel approach to certifying Graph Neural Networks against adversarial attacks. Finally, we'll wrap up with fascinating research on LLM package hallucinations, revealing how frequently models conjure non-existent code dependencies and what this means for autonomous coding agents.

[Cover image: a cyberpunk-style technical illustration split into five interconnected panels, one per topic, including a shattered 'AGI' crystal ball, a circuit-board maze with a warning sign, and a stack of glowing code.]
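To make the package-hallucination risk concrete, here is a minimal sketch (not taken from the papers covered this week) of how you might sanity-check dependency names emitted by a coding agent against the public PyPI index. The `package_exists` helper and the `suggested` list are hypothetical examples, and it assumes the PyPI JSON endpoint is reachable from wherever you run it.

```python
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"


def package_exists(name: str) -> bool:
    """Return True if `name` resolves to a real package on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # PyPI has no package by this name - a likely hallucination.
            return False
        raise  # Other HTTP errors (rate limits, outages) are not a verdict.


# Hypothetical dependency list suggested by a model.
suggested = ["requests", "numpy", "fastjsonvalidatorx"]

for pkg in suggested:
    status = "ok" if package_exists(pkg) else "possible hallucination"
    print(f"{pkg}: {status}")
```

A check like this only confirms that a name exists on the index; it says nothing about whether the package is the one the model intended, or whether it has been squatted by an attacker, which is exactly why the research matters.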

