Last Week in GAI Security Research - 07/01/24

Highlights from Last Week

* 🪱 Synthetic Cancer – Augmenting Worms with LLMs
* 🔗 Large Language Models for Link Stealing Attacks Against Graph Neural Networks
* 🧑‍💻 Assessing the Effectiveness of LLMs in Android Application Vulnerability Analysis
* 🦠 MALSIGHT: Exploring Malicious Source Code and Benign Pseudocode for Iterative Binary Malware Summarization
* 🦜 Poisoned LangChain: Jailbreak LLMs by LangChain
Brandon Dixon
Last Week in GAI Security Research - 06/17/24

Highlights from Last Week

* 🛍 Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs
* 🕵‍♀ Security Vulnerability Detection with Multitask Self-Instructed Fine-Tuning of Large Language Models
* 🚪 A Survey of Backdoor Attacks and Defenses on Large Language Models: Implications for Security Measures
* 🤖 Machine Against the RAG: Jamming Retrieval-Augmented Generation with Blocker Documents
* ⛳ Dataset
Brandon Dixon