Resilient Cyber Newsletter #68
Security Leadership Masterclass, AI Security M&A, Sonnet 4.5 Takes on Cyber, NIST Evaluates DeepSeek, Security Risks of OpenAI’s AgentKit & 2025 CVE Trends
Welcome!
Welcome to Issue #68 of the Resilient Cyber Newsletter.
I’m slightly late in getting the newsletter out this week, and honestly I’m okay with it, as in addition to aggregating and analyzing resources for this week’s issue, I got a chance to take my oldest son to a NY Yankees game and participate in Aquia’s 2026 Sales Kickoff with my team, both of which were awesome!
While the Yankees ended up being eliminated yesterday, we got to go to Game 3, their first home game against the Toronto Blue Jays, where they pulled off a massive comeback. My son said it was the best baseball game he’s ever been to, which made me happy.
Anyways, enough about me.
This week I have a lot of great resources and analysis to share, including a masterclass in cybersecurity leadership from none other than Phil Venables, insights into AI security M&A, deep dives into the security and risks of models such as DeepSeek, the impact of AI-generated code on AppSec, and much more - so let’s get down to business.
Interested in sponsoring an issue of Resilient Cyber?
This includes reaching over 40,000 subscribers, ranging from Developers, Engineers, and Architects to CISOs/Security Leaders and Business Executives.
Reach out below!
Join the Ultimate Hands-On Cybersecurity Lab Experience
Step into Mole Wars, Cyberhaven’s interactive live training lab where you go beyond theory and into action. This isn’t another webinar; it’s a fully immersive simulation that drops you inside a live environment to hunt, investigate, and stop insider threats in real time.
Work through real-world data exfiltration scenarios, identify risky shadow AI use, and see how modern data protection technology reveals what legacy DLP can’t.
It’s part training, part mission, designed for cybersecurity professionals who learn best by doing.
Ready to take control of your own defense?
Cyber Leadership & Market Dynamics
Security Leadership Master Class 1: Leveling Up Your Leadership
If you’re like most CISOs and security leaders, you’re often looking to improve your leadership skills and be a more effective security executive. Well, there are few better equipped in our industry to help folks do that than Phil Venables, who has a massively impressive career spanning roles from finance to cloud and more, all while bringing excellent insights via his blog, which I’m a longtime follower of as well.
Phil recently kicked off a 7-part series focused on security leadership and just shared the first installment:
Security Leadership Master Class 1: Leveling up your leadership
Security Leadership Master Class 2: Dealing with the board and other executives
Security Leadership Master Class 3: Building a security program
Security Leadership Master Class 4: Enhancing or refreshing a security program
Security Leadership Master Class 5: Getting hired and doing hiring
Security Leadership Master Class 6: When disaster strikes
Security Leadership Master Class 7: Contrarian takes
Phil lays out some essential attributes in this first piece, including:
The need to act like a business executive, not an IT manager
Developing and communicating a clear, generative strategy
Mastering business-oriented communication and influence
And many more key recommendations that will serve security executives well. I’ll be following along with this series, and I recommend others do as well. I actually had the chance to interview Phil about security leadership and his journey from security to venture capital earlier this year, and he shared a wealth of wisdom during that conversation for those who missed it.
10 Lessons Learned from Scaling
In another leadership capacity, Aleksandr Yampolskiy, CEO and Founder of SecurityScorecard, recently wrote a blog about 10 things he learned while scaling SecurityScorecard from $0 in 2014 to 600+ employees, 70% of the Fortune 100 as customers, and hundreds of millions in ARR today.
I won’t go through the list item by item, but it is full of hard-earned advice from someone who has gone from zero to an industry leader in cybersecurity. A lot of the recommendations come with nuanced perspectives too, given there is no one-size-fits-all approach and each scenario requires some level of discretion.
It has key insights related to raising funding, building teams and delivering on customer outcomes.
Attend CyberCEO Summit: For Cyber Builders & Investors
The CyberCEO Summit, held Dec 10 in Austin TX, is where cybersecurity founders, CEOs, and investors come together to learn, connect, and shape the future of our industry.
Hear directly from security innovators like Ross Haleliuk and Nick Muy on scaling, funding, and navigating today’s market and GTM challenges. Meet 1:1 with security industry analysts, investors, influencers, and experts.
This event is your chance to network with peers, gain actionable insights & feedback, and build relationships that can accelerate your company’s growth. Don’t miss the conversations that could define your next move or your company’s next big opportunity. CyberCEO Summit culminates in a VIP Founder, CEO, and Investor Dinner. Reserve your spot today.
OpenAI Sneezes and Software Firms Catch a Cold
One of the stories dominating headlines this week has been OpenAI’s announcements at their DevDay event, particularly when it comes to building agents and agentic workflows.
Other examples include custom AI apps OpenAI has built for things such as document management, sales, and more, which caused the stock prices of some firms to dip, including DocuSign, HubSpot, and, albeit not as much, Salesforce.
Saanya penned an excellent piece discussing OpenAI’s push to become the “operating system of everything,” with the ability for users to invoke ChatGPT directly inside applications, as well as their Apps SDK. Saanya discusses the broader ecosystem and economic ramifications of this announcement and the potential implications for some of OpenAI’s competitors and partners alike. (Image credit: Saanya, from her article linked above.)
If They Can’t Retrofit AI, They Will Acquire It
One thing that has dominated headlines and trends in 2025 is large industry vendors acquiring AI-native startups in an attempt to accelerate their own AI capabilities or avoid the extensive R&D efforts needed to build AI functionality into their platforms. It is often done to quickly snap up existing customer bases, talent, and market momentum.
That said, it is difficult to retrofit AI, and that is a point Andrew Green made in a recent article of his.
Andrew discusses how many existing vendors try to retrofit AI and LLMs into their architectures, which often manifests as low-quality “copilots” and chatbots, whereas the AI-native companies have architected with AI in mind from the onset.
Andrew discusses the trends of AI security M&A in 2025 thus far, including companies such as AIM, Protect AI, Prompt Security, and others, and cites a deep dive on the topic. I found the diagram from that piece helpful in framing AI security products: some are more security focused, while others are more focused on the underlying infrastructure. Likewise, some are more runtime focused, while others focus on the development and planning of systems, or on vulnerability management and posture, for example.
Andrew even predicts some of the future acquisitions in the coming months/year in the various categories, such as VM & Posture, as seen below:
He was on target too, given Qwiet was recently acquired by Harness. Perhaps a bit of a curveball, as they aren’t strictly a VM player, but it still shows that the potential targets were on point.
For a deeper dive on these topics and how to frame AI security products, I recommend checking out a conversation I did with James on this not too long ago titled “Analyzing the AI Security Market”
Resilient Cyber w/ Kenny Scott - Following the Future of FedRAMP
In this episode of Resilient Cyber, I sit down with Kenny Scott, Founder & CEO of Paramify, to unpack the evolution of the FedRAMP program and FedRAMP 20x, and discuss what public sector cloud compliance looks like moving into the future.
Prefer to Listen?
Kenny and I dove into a lot of topics, including:
What FedRAMP is and why it matters
What FedRAMP 20x is and what longstanding challenges associated with FedRAMP and public sector cloud and compliance it is addressing
The various aspects of FedRAMP 20x, including its phased rollout
Changes via FedRAMP 20x when it comes to Key Security Indicators (KSI), and how they differ from “controls”
FedRAMP’s modern vulnerability management approach and how it changes from the way vulnerability management was historically handled under FedRAMP
The importance of automated assessments, machine-readable artifacts, real Continuous Monitoring (ConMon) and more for effective GRC Engineering
The role of GRC platforms when it comes to modernizing GRC
What the implications of FedRAMP 20x are for other public sector compliance programs, such as DoD’s SWFT, SRG and RMF
AI
Building AI for Cyber Defenders
“We should not cede the cyber advantage derived from AI to attackers and criminals.” 🔥
I absolutely loved hearing this line, especially from one of the leading frontier model providers (and my personal favorite), Anthropic.
Anthropic recently published an excellent article detailing how they are intentionally building AI for cyber defenders and how they specifically evaluated their latest model, Claude Sonnet 4.5, in terms of cyber-specific benchmarks.
The findings? Pretty damn good!
On Cybench, which focuses on cyber CTF challenges, Claude Sonnet 4.5 succeeded on 76.5% of challenges when given 10 attempts, DOUBLE Sonnet 3.7’s success rate from just six months earlier.
On CyberGym, which evaluates an agent’s ability to find both known and new vulnerabilities, Sonnet 4.5 achieved a state-of-the-art score of 28.9% when limited to $2 of LLM API calls, and nearly 70% when allowed to spend up to $45, which is still a relatively trivial cost for identifying vulnerabilities to remediate.
Sonnet 4.5 was also able to discover NEW vulnerabilities in 5% of cases in a single attempt, and when given 30 attempts, found NEW vulnerabilities in over 33% of projects.
What’s notable is that Anthropic is conducting these intentional improvements and research focused on enhancing cyber defenders, while also implementing safeguards to prevent the model’s use by attackers where possible, which is what we need from leading frontier model providers.
I don’t think people understand just how quickly AI is changing the landscape, including in cybersecurity, and not just in the technical sense, but also from an economic perspective, for both attackers and defenders.
I’ve been emphasizing how cybersecurity needs to pivot from being a typical late adopter and laggard to being an early adopter and innovator with AI, to avoid falling into an even deeper asymmetric disadvantage against attackers.
Examples like these show how we can do that and I really think Anthropic and their models are starting to shine among a large portion of technical users.
Evaluation of DeepSeek AI Models
DeepSeek previously rattled the U.S. AI ecosystem and even financial markets when its performance and, most importantly, cost efficiency were shared with the world.
That said, many questions remain about the DeepSeek AI models, including those related to security.
This new publication from the U.S. Center for AI Standards and Innovation (CAISI), housed within the National Institute of Standards and Technology (NIST), helps answer some of them.
Not only did they show that DeepSeek’s performance lags behind the best U.S. reference models, but that its costs are now comparable on benchmark tasks. The DeepSeek models are also far more susceptible to agent hijacking and jailbreaking than leading U.S. frontier models, and they tend to echo Chinese Communist Party (CCP) narratives that are inaccurate or misleading.
That said, DeepSeek is driving the adoption of PRC models, especially since the release of DeepSeek R1, with downloads of DeepSeek models on model-sharing platforms increasing by 1,000% since January.
How Google Protects AI Training Data
A key part of the secure lifecycle for AI is protecting training data. For those unfamiliar, training data is the data used to train models, whether for general or specific purposes. It can often include sensitive data and can serve as a vector for compromising the integrity of downstream model outputs.
Google recently published a comprehensive paper discussing how they safeguard training data, including accounting for AI-specific risks such as memorization and data and model lineage concerns, and aligning with their Secure AI Framework (SAIF).
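To make one of the paper’s themes concrete, here is a minimal, hypothetical sketch of a pre-training hygiene step: filtering records that look like they contain secrets or PII before they ever reach a training run. This is a generic illustration, not Google’s actual pipeline; the patterns, field names, and file layout are all assumptions.

```python
import json
import re

# Hypothetical patterns; a real pipeline would use a dedicated secrets/PII scanner.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def is_clean(text: str) -> bool:
    """Return True if no sensitive pattern matches the training record."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS.values())

def filter_training_file(in_path: str, out_path: str) -> None:
    """Drop records containing sensitive-looking strings before they reach training."""
    kept = dropped = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            record = json.loads(line)
            if is_clean(record.get("text", "")):
                dst.write(line)
                kept += 1
            else:
                dropped += 1
    print(f"kept={kept} dropped={dropped}")

if __name__ == "__main__":
    # Assumed JSONL corpus with a "text" field per record.
    filter_training_file("raw_corpus.jsonl", "filtered_corpus.jsonl")
```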
Cybersecurity and Agentic AI – A Race to Contain this Emerging Exposure
Something I and others, such as my friends in the OWASP Agentic Security Initiative (ASI), have been discussing a lot is the need for a paradigm shift within security and the broader industry when it comes to securing agentic AI.
This piece from VC firm Inovia dives into the topic, discussing some of the risks of agentic AI, such as privilege escalation and unauthorized actions, memory and context poisoning, goal manipulation, and more.
The piece goes on to highlight notable incidents including those associated with Cursor, Replit, Microsoft Copilot and others. The incidents include malicious instructions that led to agents bypassing security guardrails, misinterpreted instructions leading to deleted data and even full remote code execution (RCE).
The authors argue we need foundational security controls and capabilities for AI agents, such as discovery and inventory, IAM, least permissive access control, and monitoring and governance.
They even lay out a (non-exhaustive) agentic security vendor landscape.
Some are legacy vendors adding coverage and capabilities for agents, either organically or through M&A; others are emerging vendors tackling specific aspects of AI and agents, such as NHI or LLMs/models; and others are pure play, such as Zenity.
The authors close by discussing the importance of building in security now, during early agentic adoption, rather than bolting it on later, as well as the fact that agentic workflows and integrations are only going to grow more complex and we need tools to manage that risk - all points I agree with and have written and spoken about extensively myself.
Analyzing The Security Risks of OpenAI’s AgentKit
Speaking of agentic risks, as I mentioned in the cyber leadership and market dynamics section above, OpenAI announced the release of AgentKit, and while many have discussed the potential market implications for agent-oriented vendors and products, there is a major security angle here too.
Unsurprisingly, the amazing team at Zenity is on top of it with their research, doing a deep dive into AgentKit and the potential security concerns and considerations.
First, they lay out the three components of AgentKit: Agent Builder, Connector Registry, and ChatKit. Agent Builder is the visual interface for building your agents, the Connector Registry is a curated set of MCP resources from OpenAI and others that they deem trusted, and ChatKit allows developers to integrate agents into their own products.
Zenity lays out potential risks such as sensitive data exposure/leaks, excessive agency, and soft security boundaries in the form of guardrails. That said, these risks fall on the builders using AgentKit, not on the platform itself. Meaning, those using AgentKit to build agents and workflows need to understand what they’re building, how it can be compromised or expose their organizations to risk, and more.
Think of it like building on the cloud. If you choose to leave data storage unencrypted and publicly accessible, or deploy insecure and poorly configured virtual machines that are exposed to the Internet, that’s not the CSP’s fault, it’s yours. The same concept is at play here for consumers building with AgentKit.
As these agents are created, integrated, and given access, all of that has implications, as do the MCP servers the builders choose to integrate with or expose the agents to. These are the sorts of threats and considerations we have discussed in publications such as OWASP’s Agentic AI Threats and Mitigations.
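To make the excessive agency and least-privilege points concrete, here is a minimal, framework-agnostic sketch of gating an agent’s tool calls against an allowlist and logging every invocation for monitoring. The tool names and policy are hypothetical, and this is not AgentKit’s API; it simply illustrates the kind of guardrail builders are responsible for adding themselves.

```python
from datetime import datetime, timezone

# Hypothetical least-privilege policy: which tools this agent may call, and with what limits.
ALLOWED_TOOLS = {
    "search_tickets": {"read_only": True},
    "send_email": {"max_recipients": 1},
}

audit_log = []

def call_tool(agent_id: str, tool: str, **kwargs):
    """Gate every tool invocation against the allowlist and record it for monitoring."""
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": kwargs,
    })

    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        raise PermissionError(f"Agent {agent_id} is not permitted to call {tool}")
    if tool == "send_email" and len(kwargs.get("recipients", [])) > policy["max_recipients"]:
        raise PermissionError("Recipient count exceeds policy")

    # Dispatch to the real tool implementation here.
    return {"status": "ok", "tool": tool}

# Usage: an injected instruction asking the agent to mass-mail data would be blocked,
# while an approved, in-scope call goes through and is audited.
call_tool("support-agent-1", "search_tickets", query="refund")
```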
AppSec
2025 CVE Stats
One of my favorite vulnerability researchers, Jerry Gamblin, recently provided an update on CVE stats YTD for 2025. In summary:
Total Number of CVEs: 35,404
Average CVEs Per Day: 129.68
Average CVSS Score: 6.62
YoY Growth: 22.72% or +6,555 (28,849 CVEs in 2024)
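As a quick sanity check on those figures, the numbers hang together, using only the totals above (the 273-day cutoff is my assumption, roughly the end of September):

```python
# Quick sanity check on the YTD figures above.
total_2025_ytd = 35_404
total_2024_ytd = 28_849
days_elapsed = 273  # assumed cutoff around the end of September

print(round(total_2025_ytd / days_elapsed, 2))  # ~129.68 CVEs per day
print(round((total_2025_ytd - total_2024_ytd) / total_2024_ytd * 100, 2))  # ~22.72% YoY growth
```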
What’s driving the growth? Another excellent researcher in Andrey Lukashenkov provided some insights there.
As seen above, CNAs such as Patchstack, VulDB, Linux, GitHub, and VulnCheck have been major contributors of CVEs, which drives up the overall figures.
Vibe Coding is the New Open Source - in the Worst Way Possible
We know we’re seeing widespread adoption of AI tooling and generated code, with much of the focus being on velocity, volume and productivity. Unfortunately, much of that boon is coming with security risks and downsides as well, such as implicit trust, lack of validation and inherent vulnerabilities due to the largely open source datasets the coding platforms and underlying models are trained on.
This piece from Wired draws parallels between open source and vibe coding / AI-generated code, which makes sense, given the intersection between the two. I have written extensively about this myself before, in pieces titled:
Fast and Flawed: Diving into Veracode’s 2025 GenAI Code Security Report
A Security Vibe Check: When vibe coding meets security - a look at OX’s VibeSec Platform
Much like how developers inherently trusted and relied on OSS previously, they are now doing the same with vibe coding and AI-generated code (which is itself derived from OSS). This exacerbates software supply chain security challenges such as insecure code, vulnerabilities, the amplification of potential OSS supply chain attacks, and hallucinations, and it further muddies the waters around lineage.
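One concrete way to blunt the hallucination side of this is to verify that AI-suggested dependencies actually exist before trusting them, since nonexistent package names are prime targets for squatting attacks. Below is a minimal, hypothetical sketch (the helper names are mine, and a real check would also look at package age, maintainers, and download counts):

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Check whether a dependency actually exists on PyPI (404 = likely hallucinated or squat bait)."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

def audit_requirements(path: str) -> None:
    """Flag requirements entries that don't resolve to a real PyPI package."""
    with open(path) as f:
        for line in f:
            pkg = line.strip().split("==")[0]
            if not pkg or pkg.startswith("#"):
                continue
            if not package_exists_on_pypi(pkg):
                print(f"WARNING: '{pkg}' not found on PyPI; verify it isn't a hallucinated dependency")

if __name__ == "__main__":
    audit_requirements("requirements.txt")
```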
The Wired piece cites a recent Checkmarx survey of CISOs, Heads of Development, and AppSec leaders, in which a third of respondents said that 60% of their code is now written by AI.
4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks
AppSec company Apiiro recently put out some research on the state of AI coding assistants as well, pointing out that some leaders aren’t just encouraging the use of the tools but mandating it, and with that, mandating potential risks.
They found that AI-assisted developers were exponentially more “productive,” outputting far more code than their non-AI-assisted peers.
However, they also found that they weren’t just shipping faster; they were also introducing more risks, and not just a few, but 10x as many as their non-AI-assisted peers.
They found that not only is AI-assisted development leading to more vulnerabilities and risks, but that it is doing so at a rate that continues to accelerate.
They found that while the AI coding tools were great at fixing logic flaws, syntax errors, typos, and more, they didn’t do so well at catching or stopping vulnerabilities and security-focused flaws, with privilege escalation paths jumping 332% and architectural design flaws growing 153%.
Rounding out their research, they found that AI-assisted development led to the exposure of cloud credentials twice as often, which is alarming when you consider how prevalent credential compromise is in cybersecurity incidents year over year in sources such as the Verizon DBIR.
All in all, this is another example of research showing the pitfalls and peril of AI driven development when it comes to introducing risks and exposing organizations. This is something all organizations need to grapple with as AI coding becomes more and more the norm.