Latest

Last Week in GAI Security Research - 08/05/24

Highlights from Last Week

* 🧑‍⚖️ Jailbreaking Text-to-Image Models with LLM-Based Agents
* 🎣 From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks
* 🤖 The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies
* 🔊 Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification
* 🏋🏼 Tamper-Resistant Safeguards for…
Brandon Dixon
Last Week in GAI Security Research - 07/29/24

Highlights from Last Week

* 🔴 RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent
* 🩺 CVE-LLM: Automatic vulnerability evaluation in medical device industry using large language models
* ❤️‍🩹 PenHeal: A Two-Stage LLM Framework for Automated Pentesting and Optimal Remediation
* 📚 Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
* 🖐🏻 LLMmap: Fingerprinting…
Brandon Dixon
Last Week in GAI Security Research - 07/01/24

Highlights from Last Week

* 🪱 Synthetic Cancer – Augmenting Worms with LLMs
* 🔗 Large Language Models for Link Stealing Attacks Against Graph Neural Networks
* 🧑‍💻 Assessing the Effectiveness of LLMs in Android Application Vulnerability Analysis
* 🦠 MALSIGHT: Exploring Malicious Source Code and Benign Pseudocode for Iterative Binary Malware Summarization
* 🦜 Poisoned LangChain: Jailbreak LLMs by LangChain
Brandon Dixon