Latest

Last Week in GAI Security Research - 09/23/24

Highlights from Last Week
* 🧮 Jailbreaking Large Language Models with Symbolic Mathematics
* ❇ AutoSafeCoder: A Multi-Agent Framework for Securing LLM Code Generation through Static Analysis and Fuzz Testing
* 📨 Towards Novel Malicious Packet Recognition: A Few-Shot Learning Approach
* 🧑‍💻 Hacking, The Lazy Way: LLM Augmented Pentesting
* 📝 CoCA: Regaining Safety-awareness of Multimodal Large Language Models
Brandon Dixon
Last Week in GAI Security Research - 08/26/24

Highlights from Last Week
* 👮‍♂ MMJ-Bench: A Comprehensive Study on Jailbreak Attacks and Defenses for Vision Language Models
* ⚠️ While GitHub Copilot Excels at Coding, Does It Ensure Responsible Output?
* 🔐 An Exploratory Study on Fine-Tuning Large Language Models for Secure Code Generation
* 🤖 CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher
* 🦮 Perception-guided Jailbreak
Brandon Dixon