Scaling Yourself with the AI Digest
Every week for the past seven months, I've run a Python script I call "deepthought" to summarize research papers on applying generative AI to security problems. This process powers the "Last Week in GAI" digests I send out every Monday. It has processed hundreds of papers, usually 15-20 per week, across a variety of topics, including inference-based attacks against models, surfacing and resolving code vulnerabilities, and using graphs to model threat intelligence. In this post, I want to share how I leverage this material every week and how it informs my strategic thinking.
Using AI to Scale
The most obvious benefit of the security digest, and why I assume many subscribe to this newsletter, is scaling my understanding of the research published in the past week. While central repositories exist, they don't give me a way to follow topics or run routine searches. Automating this with code saves me time and lets me scale. And if I had to read each of the 20 papers every week in full, it would be difficult to make significant progress on anything else. Having AI pull out the top three points from each paper is a great way to whittle a stack of papers down to the select few I will actually read during the week.
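To make the shape of that weekly loop concrete, here is a minimal sketch. The function names are illustrative, not the actual "deepthought" script, and the LLM call is stubbed with a first-n-sentences extractor purely so the control flow is runnable; in practice that stub would be a real model call asked for the top three points.

```python
# Minimal sketch of a weekly digest loop. `llm_top_points` is a stand-in
# for a real LLM call; here it just grabs the first n sentences so the
# pipeline runs end to end without a model.

def llm_top_points(paper_text: str, n: int = 3) -> list[str]:
    """Stand-in for an LLM call that extracts the top n takeaways."""
    sentences = [s.strip() for s in paper_text.split(".") if s.strip()]
    return sentences[:n]

def build_digest(papers: dict[str, str]) -> str:
    """Render one digest entry per paper: title plus its top three points."""
    entries = []
    for title, text in papers.items():
        bullets = "\n".join(f"- {p}" for p in llm_top_points(text, n=3))
        entries.append(f"{title}\n{bullets}")
    return "\n\n".join(entries)

papers = {
    "Graphs for Threat Intel": (
        "Graphs model relationships between indicators. "
        "Traversal surfaces hidden links. "
        "Embeddings enable similarity search. "
        "Evaluation used public feeds."
    ),
}
print(build_digest(papers))
```

Swapping the stub for a hosted-model call (and the dict for a feed of the week's papers) is the whole trick: the loop stays the same while the summarization quality comes from the model.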
What I have done above can be applied to any type of problem, and if you're eager to stay current on AI, you should be applying the approach. In fact, I have a second script that processes "agentic" papers to help me understand how that space is evolving alongside security. Using AI to scale doesn't always require code; it can be as simple as prompting existing models to collaborate on a project, learn a new topic, or produce an initial draft from basic ideas.
Growing Network and Toolset
Every research paper lists a set of authors at the top, along with citations and acknowledgements. In some cases, code and experiment details are shared publicly. As we all quest to find the best agent architecture, set of fine-tuning use cases, or benchmarks to measure our progress, we benefit from sharing. It's great to be able to reach out directly to the authors of a paper I find interesting and discuss their work. It's even better to look at a code implementation, where I can peer into their architecture choices or leverage their capabilities within my own code. As an example, before fully implementing my digest code, I faced challenges fitting many of the research papers into the context windows of hosted language models. Eventually, I stumbled on the LLMLingua project, which uses small language models (SLMs) to compress prompts while retaining the core of the original material. I was able to implement it within an hour, not only improving the outputs of my code but further scaling my summarization work.
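The real LLMLingua project uses a small language model to score token importance and drop low-information tokens until the prompt fits a budget. As a crude stdlib-only stand-in (this is not LLMLingua's API, just an illustration of where compression slots into a summarization pipeline), the idea can be sketched as:

```python
# Toy prompt compression: drop common stopwords, then truncate to a word
# budget. LLMLingua does this far more intelligently with an SLM scoring
# each token; this only shows the compress-then-summarize pattern.

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "this"}

def compress_prompt(text: str, budget: int) -> str:
    """Keep at most `budget` words, dropping stopwords first."""
    kept = [w for w in text.split() if w.lower() not in STOPWORDS]
    return " ".join(kept[:budget])

paper = "This is the abstract of a paper that discusses attacks in the wild"
compressed = compress_prompt(paper, budget=6)
print(compressed)  # "abstract paper discusses attacks wild"
```

In the digest pipeline, compression runs on each paper before the summarization prompt is assembled, so a long PDF's text fits in the model's context window without losing the parts that carry meaning.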
Research as Inspiration
If you follow the research papers published week over week, you'll find a great deal of overlap in the topics and general thinking. This is especially true when it comes to "jailbreaking" or attacking LLMs. What keeps me invested in reviewing the papers each week is that I occasionally find novel ideas, framings of a problem I hadn't considered, or experiments with code I can reference and review. For example, I still think about the "Synthetic Cancer - Augmenting Worms with LLMs" paper from early July and how it gave a formal demonstration of how self-replication could work in an attack with these models. That paper didn't teach me anything new, but it gave me a data point to emphasize the need for different defensive methods, and it kicked off broader research into attacking agentic systems.
When I read the research papers, I of course pay attention to the results, but I also look at how the experiment was framed, evaluated, and tested. I sometimes find a disconnect between academics writing about security and practitioners who do it daily. I've come to appreciate this unintended naivety, as it surfaces ideas I wouldn't have bothered with and challenges my conventional thinking. Beyond ideas, I also find immense value in meta papers that aggregate all published research on a topic and enumerate each subject. These are incredibly valuable resources for forming foundational thinking.
Connecting Themes in Research to Daily Interactions
By exposing my mind to a collection of research papers and ideas at the start of each week, I find myself seeing their themes throughout my interactions with others at work, and they inform my own experimentation. It feels like a superpower to be discussing a specific AI-security use case and be able to point to relevant, timely research that offers data points to support or disprove our hypothesis. More recently, I've started copying the "raw" digest directly into an internal email, calling out people across the company on the specific papers I think could help them. The feedback on this simple gesture has been overwhelmingly positive.
In short, the digest is not just a digest—it's a tool I use to expand my understanding, foster collaboration, and integrate the latest research directly into my strategic discussions and daily operations. Now that you know how I use it, I am curious how you apply the content! Feel free to reach out directly at brandon[@]9bplus[.]com.