Resilient Cyber Newsletter #72
Cyber Threat Snapshot, Mastering Cyber Budgets, AI Security M&A, No AI Bubble, AI Agent Security Summit, 2025 State of Dependency Management & AI Coding Agents Under Attack
Welcome!
Welcome to issue #72 of the Resilient Cyber Newsletter. As we continue to move through the year, AI remains a dominant topic in discussions, not just from the cybersecurity angle, with vulnerabilities, risks, and the need for governance, but also from the perspective of economics, GDP, and market ramifications.
So, without further delay, let’s walk through the hot topics and resources of the week.
Interested in sponsoring an issue of Resilient Cyber?
This includes reaching over 40,000 subscribers, ranging from Developers, Engineers, Architects, CISO’s/Security Leaders and Business Executives
Reach out below!
Doxxing and social engineering start with exposed PII
From executive impersonation to targeted phishing, we often hear humans are the weakest link in cybersecurity.
While I may not always agree with that, there’s no denying that limiting the attack surface of publicly available personal data is part of a sound defense-in-depth strategy.
DeleteMe helps make that possible by removing employee PII from hundreds of high-risk data sources. Security teams can use it to protect executives, high-risk staff, and entire workforces from identity-based threats.
They also offer individual plans, so you can take control of your own privacy. Use my code resilientcyber for 20% off personal coverage.
Cyber Leadership & Market Dynamics
Cyber Threat Snapshot
The U.S. House Committee on Homeland Security (HCHS) recently provided a snapshot of 2025 cyber attacks and malicious activities.
Nation-state cyber attacks are up BIG, including a 150% increase from the PRC, with a surge of 300% in targeted attacks against financial services, manufacturing and industrial sectors
The average cost of a data breach in the U.S. reached $10 MILLION, which is 2x the global average
One in six data breaches in 2025 involved attacks driven by AI
70% of attacks in 2024 were aimed at critical infrastructure
This and much more in the snapshot, which is worth a quick read.
Resilient Cyber w/ Ross Young - Mastering the Cybersecurity Budget
In this episode, I sat down with a friend and ex-CIA Officer turned Cybersecurity leader, Ross Young over at CISO Tradecraft.
We unpacked the topic of mastering the cybersecurity budget. This includes examining whether most cyber budgets are wasted, determining where and how to make investments, justifying spending, and more.
Don’t miss this chance to delve into an often-overlooked subject that many Cybersecurity leaders struggle with.
Ross and I touched on a lot of great aspects of cybersecurity budgets, including:
Why the topic of cybersecurity budgets is often neglected among the hype around AI, tech and cybersecurity tools.
A provocative piece Ross recently wrote, laying out why and how most cybersecurity budgets are wasted, and what leaders can do to actually ensure return on security investment (ROSI).
How most CISO’s fail to look at material risks, relevant threats and measure progress when it comes to cybersecurity budgets and spending, and steps they can take to fix it.
The topic of point products or best of breed vs. platforms and where security leaders can drive both efficiencies and reduced risk, as well as tap into best of breed products when appropriate.
How to communicate about cybersecurity budgets and spending with non-cyber peers such as CFO’s and the board.
The risk of cybersecurity tool sprawl both from a financial and budgetary perspective as well as from a cyber risk perspective and how to rationalize a cyber portfolio.
Ross’s upcoming book “Cybersecurity’s Dirty Secret: Why Most Budgets Go to Waste”.
Ross also recently launched a virtual course titled “Master the Budget Game in Cybersecurity”. It includes 8 hours of CPE’s, 30 bite-sized modules, and downloadable templates. The course is currently 50% OFF, so I recommend checking it out now!
The AI Security M&A Spree Continues
2025 has been a year marked by big headline AI Security acquisitions, and that trend doesn’t seem to be over yet. The latest example is ZScaler’s acquisition of SPLX.
The play seems to be aimed at enhancing Zscaler’s Zero Trust Exchange, focusing on runtime AI guardrails, proactive AI asset discovery and automated red teaming. This of course bolsters Zscaler’s AI security capabilities while also giving SPLX access to Zscaler’s massive customer base.
Zscaler is one of the cyber industry giants that dominate the ecosystem and have an excellent leader in Jay Chaudhry. For a deeper look at Zscaler, both their history and their plans for the future, I recommend checking out the Inside the Network interview with Zscaler’s CEO Jay Chaudhry.
A New Generation of Sequoia Stewards
The Sequoia team took to the Internet this week to share a letter from Roelof Botha, where he announced Alfred Lin and Pat Grady will be leading the firm moving forward. Sequoia of course is one of the most dominant and well known VC firms in the ecosystem, with an incredible list of investments and teams they’ve built to their credit.
AI Market Bubble? Perhaps Not
Coatue recently released a public markets report, with much of it focusing on the AI bubble debate. They point out that AI continues to drive markets higher:
While they point out that the top 10 companies in the U.S. do represent an outsized portion of GDP at 77%, they also point out that they are profitable, international and diversified in their focus areas. You can see how during the dot-com era the top 10 companies were only 34% of GDP, so we’re seeing a massive increase in the impact of these companies on the entire U.S. economy.
Their report also points out the rapid adoption curve of AI compared to prior waves such as the Internet or PCs:
They highlight that valuations are more practical than they were during the 90’s and dot-com era as well:
The debate about whether or not AI is a bubble will continue on for sometime, until the unprecedented growth and adoption shows it has staying power, or doesn’t, and some will say “I told you so!”.
One thing is clear, the U.S. economy is reliant on the former, as the latter would be devastating.
AI
OpenAI Introduces Aardvark - An Agentic Security Researcher
Many of us in the community have been very excited about the potential of AI and Agents to systemically improve cybersecurity. I recently shared a Foreign Affairs piece from leader Jen Easterly where she echoed this sentiment and the potential of AI to address longstanding challenges, such as vulnerabilities in open source for example.
That’s why it was awesome to see OpenAI launch “Aardvark”, which they’ve dubbed as an Agentic Security Researcher, which can look for vulnerabilities in source code, and even propose fixes at-scale.
It involves a multi-stage pipeline where it conducts analysis, scans commits, validates potential vulnerability findings and then integrates with OpenAI Codex to generate patches to fix the identified and validated vulnerabilities.
From Prompt Injection to Promptware: Evolution of Attacks Against LLM Applications
I came across this talk from Ben Nassi this week, which is from Zenity’s recent Agentic AI summit event and found it excellent. Ben breaks down the rise of prompt injection against AI and LLMs, to promptware, which is the combination of prompt style attacks and malware.
This is a great conversation, and the rest of the events lineup is worth queuing up as well.
AI Agent Security Summit - October 2025
Speaking of Zenity’s summit, the entire playlist is worth checking out, including talks about vulnerabilities in AI agents, the AI vulnerability scoring system (AIVSS), agents as insider threats and more.
Zenity has been a team who has really impressed me with their research and I recently had their founder on my Resilient Cyber Show, which can be found below:
When AI Agents Go Rogue: Agent Session Smuggling Attack in A2A Systems
I previously covered the introduction of the Agent to Agent (A2A) protocol into the ecosystem early this year, riding the wave of excitement around Agentic AI, similar to the Model Context Protocol (MCP) before it.
Palo Alto’s Unit 42 recently published some interesting research looking at agent session smuggling attacks in A2A systems.
Unit 42 described it as a new attack vector tied to the stateful nature of cross-agent communication, where interactions are remembered to have ongoing context. There’s a malicious remote agent that misuses an ongoing session to inject additional instructions between a legitimate client request and the servers response with hiding instructions which can lead to context poisoning, data exfiltration or unauthorized tool execution on the client agent.
Vibecoding x Cybersecurity: Survival Guide by the Expert Who Fixes Your Code After You
My article “Vibecoding Conundrums” recently got tagged in a piece from
and , which I found to be a good article on vibe coding securely. They walk through common pitfalls, such as leaking credentials via prompts to third parties, creating overly permissive prototypes, and unverified dependencies.The blog discusses how to vibe code securely, and mitigate risk while still leveraging the benefits of AI coding tools.
A Practical Guide for Securely Using Third-Party MCP Servers
Excitement and discussions around MCP have dominated 2025. Since the Model Context Protocol (MCP) was introduced by Anthropic, vendors and individuals alike are making MCP servers available for use everywhere we look.
That said, we’ve seen novel attack vectors raised by folks such as Idan Habler, PhD Vineeth Sai Narajala, and others, intentionally malicious MCP servers in the wild found by teams such as Koi and folks such as Christian Posta raise concerns about MCP and IAM.
Astrix Security and Endor Labs have published excellent research on the MCP landscape and the security implications. Now, the OWASP GenAI Security Project has just published an excellent and concise practical guide for securing the use of third-party MCP servers.
This includes the current vulnerability landscape, with examples such as tool poisoning, rug pulls, prompt injections, and memory poisoning.It also covers client security and server discovery, as well as a breakdown of the authentication and authorization activities of MCP.
There’s a comprehensive look at automated scanners, such as MCP-Scan from Invariant Labs and others, as well as open source management and sandboxed execution. It concludes by examining governance, including workflows, roles, and responsibilities, as well as the need to establish a trusted MCP registry.
Needless to say, it covers a great deal of ground in only 16 pages and is an excellent resource for the community as MCP adoption and implementation continue to grow.
AppSec
2025 State of Dependency Management Report- AI Coding Agents and Software Supply Chain Risk
Endor Labs just dropped the 2025 State of Dependency Management and, it’s a real doozy. The team took a comprehensive look at both the state of AppSec and the role of AI on the modern SDLC, along with the rise of MCP and its potential as both an asset and a liability. This included:
10,000 MCP repositories and hundreds of AI agent prompts being tested 🤯
They found 75% of MCP servers are built by individuals, 41% lack any licensing information, and 82% use sensitive APIs that require careful security controls
AI Coding Agents are transforming the SDLC, but also the attack landscape. This includes 49% of imported dependencies having known vulnerabilities, 34% of the dependencies being phantom hallucinations and only 20% of them are secure without additional tools.
They also found equipping AI agents with security tools and into developer workflows can help improve safe dependency usage from 20% -> 57% 📈
The report is full of excellent insights on how AI is impacting the AppSec ecosystem, and I’ll be publishing a comprehensive deep dive of the report soon - so keep an eye out for that!
2025 SANS CloudSecNext Summit Playlist
SANS recently published their 2025 CloudSecNext Summit playlist, which includes an excellent collection of talks on everything about cloud security, from IAM, SecOps, GRC and more.
This includes a talk from my friend and former teammate in Dakota Riley titled “Compromising Pipelines with Evil Terraform Providers”.
The State of Product Security for the AI Era - 2026
The Cycode team recently published an interesting report with insights from over 400 CISOs and security leaders on the impact of AI, including securing AI-generated code, shadow AI, budgets, productivity and more.
They found near ubiquitous adoption of AI when it comes to coding assistants:
They also found AI-generated code is the #1 blindspot for AppSec and Product Security teams, followed closely by the use of AI tools and software supply chain risks.
AI security is driving budget increases, with 100% of those surveyed stating they expect an increased budget for AI security in 2026.
The report is full of other great insights in terms of AI’s intersection with AppSec, so I recommend giving it a full read and I will likely be doing a deep dive of the key takeaways here soon!
PromptJacking: The Critical RCEs in Claude Desktop That Turn Questions Into Exploits
I’m beginning to feel like I’m sharing research and findings from the Koi team damn near weekly at this point. That said, they continue to find critical findings that are impacting the community and tied to leading AI coding tools, as well as MCP and developer extensions.
The latest example are three official Claude extensions with over 350,000 downloads which are all vulnerable to remote code execution (RCE). The extensions are Chrome, iMessage and Apple Notes and were published by Anthropic themselves and are available on Claude Desktop’s extension marketplace.
All three have now been fixed by Anthropic, and were rated critical via CVSS, but nonetheless, the incident goes to show that even leading providers can inadvertently expose organizations with malicious extensions in AI coding tools.
Koi’s blog walks through how compromised or malicious web pages that Claude interacts with as part of fetching data to respond to prompts could lead to triggering a vulnerability and executing malicious code.
Deep Dive: Cursor Code Injection Runtime Attacks
Continuing the trend against AI coding agents, the team at Knostic, another AI security firm provided an excellent deep dive on code injection runtime attacks against Cursor. Cursor of course is an AI coding agent with widespread adoption and use.
This involves a malicious extension and can lead to taking over the IDE and developer workstation. This highlights the continued vulnerabilities and attack vector of AI coding agents, which can compromise developer endpoints and impact organizations, turning the “productivity” gains of AI coding into threat vectors.
Their blog walks through how AI coding agents often have insecure architectures, expand the attack surface and perpetuate old classes of vulnerabilities.




















Thank you