
Last Week in GAI Security Research - 12/23/24
Highlights from Last Week
* Evaluation of LLM Vulnerabilities to Being Misused for Personalized Disinformation Generation
* Trust Calibration in IDEs: Paving the Way for Widespread Adoption of AI Refactoring
* Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation
* SpearBot: Leveraging Large Language Models in a