Stats and Bytes

Stats and Bytes

Share this post

Stats and Bytes
Stats and Bytes
🎩 Top 5 Security and AI Reads - Week #29

🎩 Top 5 Security and AI Reads - Week #29

Rowhammer attacks on GPUs, space cybersecurity testbed frameworks, architectural backdoors in deep learning, attention-aware prompt injection attacks, and LLM cryptography capability benchmark

Jul 20, 2025
∙ Paid

Share this post

Stats and Bytes
Stats and Bytes
🎩 Top 5 Security and AI Reads - Week #29
Share

Welcome to the twenty-ninth instalment of the Stats and Bytes Top 5 Security and AI Reads weekly We're kicking off with a look at GPUHammer, an attack that brings the notorious RowHammer technique to GPU memories, demonstrating how attackers can break machine learning model performance through strategic bit-flips. Next, we examine an innovative fidelity framework for space cybersecurity test beds that provides a comprehensive approach to building realistic hardware-based security testing environments. We then jump into a thorough survey of architectural backdoors in deep learning systems, offering a detailed taxonomy of vulnerabilities that blend traditional cybersecurity threats with modern AI attack vectors. Following that, we explore an attention-aware prompt injection attack that cleverly circumvents recent fine-tuning-based prompt injection defences by leveraging architectural insights to optimise adversarial prompts. We wrap up with a benchmark evaluating large language models' …

Keep reading with a 7-day free trial

Subscribe to Stats and Bytes to keep reading this post and get 7 days of free access to the full post archives.

Already a paid subscriber? Sign in
© 2025 Josh Collyer
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture

Share