Resilient Cyber Newsletter #73
Cybersecurity Incentives, AI Wildfire is Coming, Sins of Security Vendors, State of AI Report, Agentic AI Security, New OWASP Top 10 & the Evolution of AppSec
Welcome
Welcome to issue #73 of the Resilient Cyber Newsletter.
2025 continues to be an interesting and exciting time, with frothy markets being disproportionately driven by AI, new and novel risks from AI agents, longstanding challenges with security vendors, and AppSec evolving from a new OWASP Top 10 to a revolution centered on moving from shifting left to runtime.
I'll unpack everything this week, so let’s get started!
Interested in sponsoring an issue of Resilient Cyber?
This includes reaching over 40,000 subscribers, ranging from Developers, Engineers, Architects, CISO’s/Security Leaders and Business Executives
Reach out below!
NHIs + AI: Secure the Next Era of Identities
AI is rewriting the rules of identity and expanding your attack surface. Non-human identities (NHIs) like service accounts & tokens now vastly outnumber humans, and attackers are exploiting the blind spots between them. Join Permiso for a live session on how to close those gaps & secure every identity—human, NHI, and AI—across your environment.
Why it matters:
• 84% of breaches start with identity compromise—but most tools only see half the picture.
• NHI & AI identities operate 24/7, making lateral movement nearly invisible to traditional controls.
• Attackers are hijacking AI agents to exfiltrate data and impersonate trusted systems.
This webinar shows how to unify visibility, correlate human and machine identity behaviors, and apply threat-informed detection across the full identity chain.
Cyber Leadership & Market Dynamics
Two Cyber Practitioners Accused of Hacking and Extorting U.S. Companies
In an odd story that got picked up by major news outlets such as CNN, two former employees of cyber firms have now been indicted and accused of participating in conspiracies to hack and extort U.S. firms.
They were accused of deploying a popular ransomware against a medical device firm in FL, a pharmaceutical firm in MD, and a drone company in VA, among others. They tried to demand a $10 million payment, and even received $1.27 million in payments for their ransom activities.
This is a case where practitioners may be tempted by the financial success of malicious actors using ransomware and decide to try it themselves. Unfortunately, they also apparently missed the reality that many end up getting caught and charged, especially if you’re here in the U.S. and not shielded internationally from extradition.
Not Getting Incentives Right Can Kill a Security Initiative or Security Startup
I find myself discussing incentives (or the lack thereof) in cybersecurity and how they drive behavior and the outcomes we observe in the cyber ecosystem. That is why I was happy to see my friend
recently publish a piece on the topic (while also giving me a shout-out).Ross penned an excellent piece highlighting how incentives drive behavior, including for organizations and developers. Developers are incentivized and promoted based on factors such as velocity and productivity, rather than security. Therefore, it shouldn’t be surprising when they don’t prioritize security.
He is spot on and it is something I have spoken a lot about, in the sense of how we get frustrated when people don’t care about security as much as we do, and we shake our fist in anger all while ignoring the fact that incentives drive behavior and not only are developers not incentivized to prioritize security, but neither are organizations, as you look at the lack of impact to share prices, profits etc. when it comes to security incidents. That’s why efforts such as CISA’s voluntary, toothless Secure-by-Design pledge are mostly virtue signaling efforts that lack any real enforcement mechanisms.
Speed to market and profits trump security, and honestly, security isn’t and never will be the organization's top priority, or their most pressing risk, as businesses grapple with a long list of competing risks, including constraints around capital, shareholder expectations, profits, competitive market share among peers, and more.
I discussed this in depth in prior articles, such as:
Ross provided a thought-provoking quadrant to go along with his article as well:
Cyber firm F5 anticipates revenue hit after attack
Speaking of incentives, one of the primary concerns for corporations is revenue and profits. That is why it was interesting to see Cybersecurity firm F5, which has been a victim of a fairly damning attack recently, including the exposure of their source code, openly admit that they anticipate a revenue hit due to the incident.
In a message to shareholders, the firm discussed how it’s anticipating the revenue hit. While it remains to be seen how much of a hit and the long-term impact it has, if any, it is sometimes refreshing to see some level of market reaction to cybersecurity incidents, even if they don’t last long, as the market often has a short memory.
The AI Wildfire Is Coming. It’s going to Be Very Painful and Incredibly Healthy.
While this is an “AI” piece, it is heavily focused on the market dynamics involved. It is genuinely one of the best and most comprehensive pieces I’ve come across yet on the whole AI bubble debate.
The author
puts together a fantastic article comparing the market to a wildfire, where tech cycles have peaks and valleys, including the need for fires to clear brush, redistribute talent, and leave infrastructure for following founders and players in the market.Dion walks through previous super cycles, including the burn of 2,000 and key players such as Amazon, eBay, Microsoft, and others; the growth of the Internet; the burn of 2008 and its key players; and the current AI cycle.
Dion makes the case that what is unique to this cycle is unlike previous cycles, which had an abundance of small, overvalued startups; we have a market where the concentration is on the “tallest trees”, the most prominent players in the sense of Nvidia, OpenAI, and Microsoft, and their massive expenditures, investments, cross-investments, and more. This is something I have discussed in prior newsletter issues, where we discussed the outsized role these AI leaders are playing in the overall U.S. market growth and GDP, and the potential risks that they pose to the economy if there is a change of direction.
Below are some great charts/diagrams from the article, and I really recommend reading it in its entirety.
Busy Work Generators
That’s how
describes most cybersecurity products, and he’s right. Most security tools are great at creating new problems and work for the organization. Alerts, notifications, findings - toil.Finding problems? We’re great at that.
Fixing them? Not so much.
He frames it as a market for “lemonade”, where downstream vendors and products are created to address problems created by other products in the stack and ecosystem. It creates a self-licking ice cream cone scenario, with a demand for infinite spend, headcount growth, and tool expansion.
We continue to beat developers and the business aside the head with alerts, findings, and notifications. Meanwhile, attackers continue to thrive on low-hanging fruit.
Dormant and overly permissive accounts, exponentially growing vulnerability backlogs, and most ironically, insecure security product shelfware that expands attack surfaces.
The Sins of Security Vendor Research
I’m a big fan of security vendor research. I routinely read, cite, and share vendor reports on the attack landscape, threats, and trends. However, like many others, I’m not naive to the fact that these reports and research often have a bias or an angle to them as well.
Which is why I really enjoyed Rami McCarthy’s piece titled “The Sins of Security Vendor Research”, where he discussed some of these behaviors that influence security vendor research and the unintended consequences it can have.
He cites the four sins as:
Fear, Uncertainty, and Doubt
False Novelty
Correlation, not Causation (statistical sins)
Selling out to Marketing
Rami discusses the negative impacts and outcomes that each of these sins has, and how they can alienate the very audience the researchers and vendors set out to share their research with or influence in the first place.
I see this all the time, with teams producing reports that contain conclusions and findings justifying their products. On the one hand, it makes sense, given that it focuses on the problem space they’re most passionate about. On the other hand, it is also a fine line to ensure the reporting and research have merit and aren’t just leading customers to buy their products.
AI
State of AI Report
I recently came across this amazingly comprehensive breakdown of the state of AI from Nathan Benaich at Air Street Capital. It looks at advances and key themes of AI throughout 2025, including Research, Industry, Politics, Safety, and Predictions. This includes advances in models, market growth among key players, the geopolitics between China and the U.S. in the AI race, as well as key aspects related to Cybersecurity.
I strongly recommend checking out the full presentation. Nathan has also put together a video walking through the presentation and key takeaways for those who prefer a video breakdown.
Claude Pirate: Abusing Anthropic’s File API for Data Exfiltration
We continue to see the expansion of AI tools such as Cursor, Windsurf, and Claude among others, and, every time the capabilities expand, so do potential attack scenarios. This blog highlights how you can use Claude’s new ability to perform network requests to potentially exfiltrate data that users have access to.
LLM Memory Systems Explained
One of the key topics when it comes to LLMs that discussed in the industry is memory, or lack thereof for LLMs. This blog was an excellent primer on the topic that I found helpful for better understanding LLMs and memory.
It is dubbed as a “introductory guide to how LLMs handle ‘memory’, from context windows to retrieval systems and everything in between”.
The key theme is that LLMs do not remember anything, they are stateless, with each inference being independent and using prior messages as inputs to form a “context window”.
The article does an excellent job of not only explaining LLMs and memory but the “three competing constraints” this lack of memory creates:
The piece goes on to discuss techniques we use to get around these limitations including Summarization, Fast Tracking and Retrieval Systems, each of which comes with their own benefits and drawbacks. Some of these aren’t just performance related but also have security and privacy implications as well, and it is worth noting some providers may be using these techniques with or without your consent or awareness.
Agentic AI Identity 101 Cheat Sheet
I keep running into teams looking to use AI Agents that can read data, call APIs, and even make changes across core business infrastructure.
It’s easy to forget that every one of those agents is associated with an identity to manage – with privileges, context, and audit requirements like anything else in your environment.
Resilient Cyber’s partner Aembit put together a nice “cheat sheet” that breaks down the identity and access concepts worth understanding now, before autonomous systems become a bigger part of daily operations.
What I particularly like is how it discusses autonomy as a spectrum while covering the core concepts of Agentic AI identity and the most critical risks that security teams and leaders should be aware of.
This is an excellent resource as a primer on a topic that will soon be fundamental to the future of identity and access management as Agents and their associated identities become commonplace.
Resilient Cyber w/ Kamal Shah - The State of AI in SecOps
In this episode of Resilient Cyber, I sit down with Kamal Shah, Cofounder and CEO at Prophet Security, to discuss the State of AI in SecOps.
There continues to be a tremendous amount of excitement and investment in the industry around AI and cybersecurity, with Security Operations (SecOps) arguably seeing the most investment among the various cybersecurity categories.
Kamal and I will walk through the actual state of AI in SecOps, how AI is impacting the future of the SOC, what hype vs. reality is, and much more.
Prefer to listen?
Please be sure to leaving a rating and review, it helps a ton.
Agentic AI Security - Threats, Defenses, Evaluation and Open Challenges
As we all know, we’re in the “decade” of Agents (queue Karpathy), with excitement around a near infinite set of use cases and potential. That said, as noted by the OWASP GenAI Security Project and others, Agentic AI also poses numerous threats and security challenges.
This is an excellent paper on the topic, highlighting key vulnerabilities that have already been discovered (cc: Zenity, Astrix Security Ken Huang, Idan Habler, PhD, Vineeth Sai Narajala, Gadi Evron, etc.). It also provides a comprehensive taxonomy of security threats to Agentic AI, including prompt injection (both direct and indirect), jailbreaks, tool abuse, and other related threats.
It covers key threats to emerging agentic protocols such as MCP and A2A. It then outlines fundamental defenses and security controls, including prompt-injection-resistant designs, runtime behavior monitoring, policy filtering, and governance.
This is a well-written paper that is well-researched, thoroughly cited, and accompanied by intuitive diagrams that help clarify key concepts.
Below is their comprehensive taxonomy of security threats to agentic AI:
It also has useful diagrams to demonstrate direct and indirect prompt injection:
The paper discusses relevant threats, mitigations and open challenges at depth and is an excellent publication to read.
AppSec
The OWASP Top 10 Gets Modernized
This week, at the event in D.C, OWASP® Foundation announced their 2025 OWASP Top 10. The update reflects the most significant update since 2021. This includes some categories being revised, rankings changing and the most vast AppSec dataset to date. I breakdown some of them major changes in a recent blog, including.
The fact that Broken Access Control still reigns supreme, which aligns with broader trends aren’t credential abuse.
Some are hopeful that AI can help here, with the rise of AI SAST, and the ability to reason about complex business logic, authentication bypasses and data flow issues. I anticipate this will become even more complicated with the excitement around Agentic AI and complexities with IAM.Security Misconfigurations takes #2, dominating cloud environments and infrastructure and being driven by factors such as secret leakage.
Further adding to the challenge is that many products lack guidance such as DISA STIG’s/CIS Benchmarks and often configurations are contextual based on environments and business use cases.
Software supply chain security gets its due recognition, with Software Supply Chain Failures rising to #3.
This is now evolving beyond just bad components, to include underlying infrastructure and build systems.
AI is going to further exacerbate this with factors such as LLM package hallucinations and now vulnerabilities and attack vectors associated with AI coding assistants such as Cursor, Claude and Windsurf, as demonstrated by folks such as Gadi Evron, Idan Habler, PhD and Ken Huang
The Evolution of AppSec - From Shifting Left to Rallying on Runtime
If you’ve been paying attention in AppSec over the last several years, we have seen the community get frustrated with the way we have tried to “shift left”, coupled with an emphasis on runtime visibility, creating the new category of Application Detection & Response (ADR).
I’ve previously written about ADR, and in my latest Resilient Cyber piece, I go even deeper on the topic, leveraging insights from Contrast Security‘s Software Under Siege Report.
I also share a recent conversation I had with Jeff Williams and Naomi Buckwalter on all things ADR, Shift Left, and AI’s intersection with AppSec.
- We’ve come to grips with the reality that while we should still embed security early in the SDLC, we can’t prevent all vulnerabilities from reaching production.
- However, traditional tools such as EDR and WAF miss critical insights at the application-layer, hindering SecOps and SOC teams’ detection and response capabilities to modern threats.
- All of this is happening at a time when application vulnerability exploits are one of the dominant attack vectors, per sources such as Verizon’s DBIR and M-Trends
I outline the constant state application-layer probes and attacks that teams are facing, as well as the growth of vulnerabilities outpacing their remediation capacity.
I explain why ADR addresses these gaps and also builds on some of the challenges of shift left to fully cover the SDLC process with security, from code to runtime visibility and context.
GitHub Copilot With Major Announcements
GitHub Copilot recently announced some major capability releases, including the ability to assign code scanning alerts to Copilot for automated fixes, as well as Copilot coding agents now automatically validating code security and quality.
Moving forward, the goal is for new code generated by Copilot’s coding agent to automatically be analyzed by GitHub’s security and quality validation tools, such as CodeQL and secret scanning, and analyzing risks in dependencies against the GitHub Advisory Database. They also mentioned that these new features do not require a GitHub Advanced Security (GHAS) license.
I found these announcements noteworthy due to GitHubs dominant role in the development ecosystem, so them natively integrating these security capabilities has the potential to make a systemic impact on vulnerabilities and risks.
False Negatives in SAST: Hidden Risks Behind the Noise
We hear a ton about false positives in SAST tools, and rightfully so, as it wastes a ton of time among developers and AppSec practitioners. But, what about false negatives and the potential real risks organizations overlook as a result of them?
This piece from Endor Labs highlights the latest research from academia and industry alike, demonstrating how SAST tools miss between 47% to 80% of vulnerabilities in real-world tests, and even combining multiple SAST tools only reduced false negative rates 30% to 69%, which also creating even more false positives.
The article highlights an alarming stat:
“roughly one-fifth of vulnerabilities (22%) go completely undetected by SAST tools”.
The blog goes on to highlight the limitations that cause SAST tools to miss findings, such a logic and context problems and the uncomfortable tradeoffs organizations have to make when using SAST.
Many are excited about the combination of AI and SAST into AI SAST tools which can address complex business logic and other shortcomings of traditional SAST.
“Fund Us or Stop Sending Bugs”
A bit of an uproar erupted on X recently involving an open source framework in FFmpeg and Google and others. It revolved around open source vulnerabilities, security disclosures and the responsibilities of large tech firms when it comes to open source.
The debate involves Google’s AI agent finding bugs in FFmpeg, and the bugs being obscure, and the open source project maintainers emphasizing that they are volunteers and asking that Google either fund the remediations, or provide patches themselves with bug reports.
This is a really fascinating case where the longstanding voluntary and unfunded nature of open source is running into the use of AI to find bugs which exacerbates demands on the voluntary maintainers.



















