I co-authored Effective Vulnerability Management in 2024 because I believed the industry’s approach to vulnerability management was fundamentally misaligned with how software was actually being built, deployed, and attacked.
If the cost of responding to AI threats is so much greater than the cost of conducting them, at what point can attackers cost-effectively overwhelm even a mature cybersecurity team, effectively causing a Denial of Service?
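To make that asymmetry concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a measurement:

# Back-of-the-envelope cost-asymmetry sketch (all numbers assumed).
attack_cost_usd = 0.50        # assumed attacker cost per AI-generated attack
triage_minutes = 30           # assumed defender effort to triage one attack
analysts = 10                 # a reasonably mature SOC
capacity = analysts * 8 * 60  # analyst-minutes available per day

attacks_to_saturate = capacity / triage_minutes          # 160 attacks/day
attacker_spend = attacks_to_saturate * attack_cost_usd   # $80/day

print(f"{attacks_to_saturate:.0f} attacks/day saturate the team "
      f"for about ${attacker_spend:.2f}/day of attacker spend")

Under those assumptions, roughly $80 a day of attacker spend consumes the entire team's triage capacity. That is the Denial of Service on the humans.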
We are in a bind between burnout and AI upskilling. 29% of teams do not have the budget to hire enough people. Even among those that do, 30% cannot find people with the needed skills, and AI is the #1 skill security teams need, at 42%. Yet only a quarter of organizations invest in upskilling, so how are the underfunded teams going to build AI skills?
I am waiting for organizations to get their legal teams underwater with the EU AI Act. Then cue all the incoming third-party questionnaires you don't have answers to, because your insecure product is a liability to your customers. Industry history shows that new threats and regulations create headline costs, and organizations get scared straight into investing.
Data sourced from the ISC2 Cybersecurity Workforce Study 2025.
Chris, one thing I really like about your posts is that you always bring the data. Many of us can say we feel this change in our gut, but you show us facts and figures and help us quantify this dynamic situation, and I appreciate that.
No matter what they do - it won't work. I'll reiterate my meme: "You can haz better security. You can haz worse security. But you can not haz 'security'. There is no security. Deal."
With AI in particular, the attack surface is essentially infinite - much like it is with humans - which is why it can only be addressed imperfectly, through threat modeling.
This has to be addressed in terms of what is feasible and what isn't. The emphasis needs to shift to detection, containment and remediation - not prevention. What prevention is attempted needs to be based on threat modeling.
The only solution to software vulnerability (as opposed to overall security) is redefining what "software engineering" actually requires in order to produce provably correct software. AI can probably help with that - but LLM-generated code won't.
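As a minimal taste of what "provably correct" means (a toy sketch, not a real engineering workflow): in a proof assistant like Lean 4, a safety property is a theorem the compiler checks for every input, not a test that samples behavior.

-- Minimal Lean 4 sketch: non-negativity of clampNonNeg is
-- machine-checked for all Int values, not spot-checked by tests.
def clampNonNeg (n : Int) : Int :=
  if n < 0 then 0 else n

theorem clampNonNeg_nonneg (n : Int) : 0 ≤ clampNonNeg n := by
  unfold clampNonNeg
  split <;> omega

The toy itself doesn't matter; the point is that the property holds by construction, which is the bar a redefined "software engineering" would have to clear.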
But this will be a sea change in the industry. I frankly doubt it will be an accepted solution. Humans just don't work like that.