🎩 Top 5 Security and AI Reads - Week #29
Rowhammer attacks on GPUs, space cybersecurity testbed frameworks, architectural backdoors in deep learning, attention-aware prompt injection attacks, and an LLM cryptography capability benchmark
Welcome to the twenty-ninth instalment of the Stats and Bytes Top 5 Security and AI Reads weekly. We're kicking off with a look at GPUHammer, an attack that brings the notorious Rowhammer technique to GPU memory, demonstrating how attackers can degrade machine learning model accuracy through strategic bit flips. Next, we examine an innovative fidelity framework for space cybersecurity testbeds that provides a comprehensive approach to building realistic hardware-based security testing environments. We then jump into a thorough survey of architectural backdoors in deep learning systems, offering a detailed taxonomy of vulnerabilities that blend traditional cybersecurity threats with modern AI attack vectors. Following that, we explore an attention-aware prompt injection attack that cleverly circumvents recent fine-tuning-based prompt injection defences by leveraging architectural insights to optimise adversarial prompts. We wrap up with a benchmark evaluating large language models' cryptography capabilities.
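
The full GPUHammer details sit behind the read itself, but the core intuition of weight-corruption attacks is easy to show: in IEEE-754 half precision, a single flipped exponent bit changes a weight's magnitude by orders of magnitude. Here's a minimal, self-contained Python sketch (the `flip_bit_fp16` helper is illustrative, not taken from the paper):

```python
import struct

def flip_bit_fp16(value: float, bit: int) -> float:
    """Flip a single bit in the IEEE-754 half-precision encoding of `value`."""
    (raw,) = struct.unpack("<H", struct.pack("<e", value))  # fp16 -> raw uint16
    raw ^= 1 << bit                                         # flip the target bit
    (corrupted,) = struct.unpack("<e", struct.pack("<H", raw))
    return corrupted

weight = 0.03125  # a plausible fp16 model weight (2**-5)
# In fp16, bit 15 is the sign and bits 14-10 are the exponent; flipping the
# most significant exponent bit (bit 14) rescales the value by a factor of 2**16.
print(flip_bit_fp16(weight, 14))  # -> 2048.0
```

One flip like this turns a tiny weight into roughly 2048, which is exactly why a handful of well-placed bit flips in GPU memory can wreck a deployed model's accuracy.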