Last Week in GAI Security Research - 05/13/24
Last week, over 40,000 security professionals descended upon San Francisco, CA for the RSA Conference, themed "the art of the possible." In previous years, the expo floor buzzed with over-hyped technologies that often became the main marketing pitches for the hundreds of vendors present. This year, however, felt more like business as usual. Although some companies still promoted AI in ways that seemed impractical or added little value to their products, many focused on their core strengths. The early stage area of the expo hall showcased the most generative AI, highlighting the creative ways organizations are trying to utilize this technology. Heading into this week, I'm more excited than ever about my role in helping to shape how security is transformed. β bsd
Highlights from Last Week
- π‘ PropertyGPT: LLM-driven Formal Verification of Smart Contracts through Retrieval-Augmented Property Generation
- π§ AttacKG+:Boosting Attack Knowledge Graph Construction with Large Language Models
- πΆβπ«οΈ Air Gap: Protecting Privacy-Conscious Conversational Agents
- π₯·π» Large Language Models for Cyber Security: A Systematic Literature Review
- π Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities
Partner Content
Pillar Security is the security stack for AI teams. Fortify the entire AI application development lifecycle while helping Security teams regain visibility and visibility control.
- Gain complete oversight of your AI inventory. Audit usage, app interactions, inputs, outputs, meta-prompts, user sessions, models and tools with full transparency.
- Safeguard your apps with enterprise-grade low-latency security and safety guardrails. Detect and prevent attacks that can affect your users, data and AI-app integrity.
- Assess and reduce risk by continuously stress-testing your AI apps with automated security and safety evaluations. Enhance resilience against novel attacks and stay ahead of emerging threats.
π‘ PropertyGPT: LLM-driven Formal Verification of Smart Contracts through Retrieval-Augmented Property Generation (http://arxiv.org/pdf/2405.02580v1.pdf)
- PropertyGPT achieved an 80% recall in generating properties for smart contracts with a precision of 64%, indicating a high rate of correctly identifying useful properties from a set of possibilities.
- The tool detected vulnerabilities in 9 out of 13 CVEs (Common Vulnerabilities and Exposures) and outperformed existing tools like GPTScan and Slither in identifying smart contract errors.
- PropertyGPT successfully ran four bounty projects, generating 22 bug findings of which 12 were confirmed and fixed, earning $8,256 in bug bounty rewards, demonstrating real-world applicability and effectiveness.
π§ AttacKG+:Boosting Attack Knowledge Graph Construction with Large Language Models (http://arxiv.org/pdf/2405.04753v1.pdf)
- The AttacKG+ framework, leveraging Large Language Models (LLMs), significantly outperformed existing CTI parsing methods in terms of extracting attack techniques, entities, and relationships, showcasing LLMs' potential in enhancing cyber threat intelligence analysis.
- Despite the advance, limitations were identified, including the model's struggle to generalize across diverse attack scenarios due to limited training data and the inherent difficulty in extracting nuanced threat information from unstructured texts.
- Through an automated, multi-layered knowledge schema, AttacKG+ not only improved the precision and recall in identifying cyber attack techniques but also provided a comprehensive temporal and behavioral understanding of attack processes, reflecting a significant step forward in attack knowledge graph construction.
πΆβπ«οΈ Air Gap: Protecting Privacy-Conscious Conversational Agents (http://arxiv.org/pdf/2405.05175v1.pdf)
- The AirGapAgent design significantly reduces the risk of data exfiltration in conversational agents, offering a 97% protection rate against context hijacking attacks by isolating user data from potential adversaries.
- Large language models like Gemini GPT and Mistral show vulnerability to single-query context hijacking attacks, with the Gemini Ultra model revealing up to 55% of data, highlighting the need for improved privacy measures in conversational agents.
- The use of synthetic user profiles in experiments demonstrates a novel approach to evaluating the privacy and utility of conversational agents, presenting a conflict between maintaining data privacy and the functional utility of agents.
π₯·π» Large Language Models for Cyber Security: A Systematic Literature Review (http://arxiv.org/pdf/2405.04760v2.pdf)
- Large Language Models (LLMs) have shown promise in improving cybersecurity tasks like vulnerability detection, malware analysis, and phishing detection with opportunities for future research in data privacy and leveraging LLMs for proactive defense.
- There is a significant increase in the publication and interest in leveraging LLMs for cybersecurity, growing from 1 paper in 2020 to over 109 papers by 2023, indicating a rapid growth trend in the field.
- The research highlights the need for comprehensive datasets and techniques for adapting LLMs to cybersecurity domains, with particular challenges in fine-tuning and transfer learning due to the specificity and complexity of security-related tasks.
π Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities (http://arxiv.org/pdf/2405.04874v1.pdf)
- The deployment of Generative AI and Large Language Models (LLMs) in Critical Infrastructure Protection (CIP) introduces both opportunities and challenges for enhancing security and resilience against cyber-attacks.
- Innovative security measures, including advanced encryption standards, Blockchain, and Quantization techniques, are pivotal for safeguarding Critical National Infrastructure (CNI) against growing cyber threats.
- Enhancing Co-operation and Information Sharing across industries and international borders is essential for a resilient defensive posture against cross-border cybersecurity threats.
Other Interesting Research
- Can LLMs Deeply Detect Complex Malicious Queries? A Framework for Jailbreaking via Obfuscating Intent (http://arxiv.org/pdf/2405.03654v2.pdf) - IntentObfuscator reveals critical vulnerabilities in LLM's security frameworks, achieving high success rates in bypassing content restrictions.
- Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks (http://arxiv.org/pdf/2405.04403v1.pdf) - Visual instruction tuning enhances LLMs' abilities but increases vulnerability to generating harmful content, highlighting the need for comprehensive safety measures.
- Mitigating Exaggerated Safety in Large Language Models (http://arxiv.org/pdf/2405.05418v1.pdf) - Interactive, contextual, and few-shot prompting strategies effectively reduce exaggerated safety behaviors in LLMs, significantly improving their decision-making accuracy and utility while navigating complex content distinctions.
- Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness (http://arxiv.org/pdf/2405.05930v1.pdf) - TrustGAIN addresses the need for trustworthy AI-generated content in 6G networks by focusing on confidentiality, integrity, and fairness, amidst the growing concerns over security, privacy, and bias.
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM (http://arxiv.org/pdf/2405.05610v1.pdf) - The CoA method uncovers LLMs' security vulnerabilities by engaging them in semantically relevant, multi-turn dialogues to elicit harmful content.
- Locally Differentially Private In-Context Learning (http://arxiv.org/pdf/2405.04032v2.pdf) - LDP-ICL enhances privacy protections effectively in in-context learning for LLMs, offering a promising approach to mitigating privacy risks without compromising utility.
- Trojans in Large Language Models of Code: A Critical Review through a Trigger-Based Taxonomy (http://arxiv.org/pdf/2405.02828v1.pdf) - The research highlights the complexity and evolving nature of trojan attacks in coding applications, pointing to a pressing need for advanced defense strategies against these sophisticated security threats.
- To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models (http://arxiv.org/pdf/2405.03097v1.pdf) - Innovative unlearning algorithms provide a more nuanced approach to reducing memorization in large language models, balancing privacy concerns with utility preservation.
- Assessing Adversarial Robustness of Large Language Models: An Empirical Study (http://arxiv.org/pdf/2405.02764v1.pdf) - Larger LLMs demonstrate improved robustness against adversarial attacks, while advanced fine-tuning techniques offer new methods to optimize efficiency and resistance.
- Who Wrote This? The Key to Zero-Shot LLM-Generated Text Detection Is GECScore (http://arxiv.org/pdf/2405.04286v1.pdf) - GECScore offers a breakthrough in zero-shot LLM-generated text detection with unmatched accuracy and robustness, leveraging grammatical error discrepancies without the need for LLM access or training data.
- BiasKG: Adversarial Knowledge Graphs to Induce Bias in Large Language Models (http://arxiv.org/pdf/2405.04756v1.pdf) - Adversarial methods like BiasKG can significantly induce bias in language models, highlighting the limitations of current mitigation strategies and the ongoing challenge of ensuring AI safety.
Strengthen Your Professional Network
In the ever-evolving landscape of cybersecurity, knowledge is not just powerβit's protection. If you've found value in the insights and analyses shared within this newsletter, consider this an opportunity to strengthen your network by sharing it with peers. Encourage them to subscribe for cutting-edge insights into generative AI.