A wave of AI-native startups is transforming penetration testing from a manual, snapshot-in-time exercise into continuous, autonomous exposure validation, and the timing couldn't be more critical.
"if an autonomous platform can deliver continuous exposure validation at $6,000 a pop"
As Han Solo said: "Well, that's the real trick now, isn't it? And it will cost you extra!"
Because the remaining 16% of uncaught bugs will only be caught by humans.
And that's assuming the 84% are legitimate bugs.
There is the "AI slop" issue, which is why many projects are trying to ameliorate the garbage "vulnerability reports" that AIs hallucinate, which are swamping those projects and causing some to curtail their bug bounty programs.
In other words, stop with the hype, already.
Here's the bottom line: AIs are INHERENTLY unreliable and insecure, by nature of the technology they are built on. They can only be made somewhat reliable and secure by imposing deterministic constraints on them. And that applies to any and all uses of AI, whether coding, security, or simply answering questions.
The biggest growing market in the near future will be "AI cybersecurity" - cybersecurity and risk management applied to AIs themselves.
"The idea behind this move is to tackle the growing problem of AI tools generating security findings (both legit and hallucination ones) at a scale open source maintainers simply cannot keep up with."
I think the intuitively obvious next step would be to have the AI agents propose, and then possibly eventually enact, the patching or reconfiguration necessary to fix the vulnerabilities. You mentioned this in the case of DARPA’s AI challenge but I don’t see details for other cases. Clearly though this goes past the point of the standard OffSec / pentest engagement. Thoughts?
Yes sir, it definitely extends well beyond OffSec into every category of Cyber (e.g. AppSec, SecOps, GRC etc.) and many AI native firms are looking to disrupt these markets via agents and automation of traditional manual services and labor as well!
"if an autonomous platform can deliver continuous exposure validation at $6,000 a pop"
As Han Solo said: "Well, that's the real trick now, isn't it? And it will cost you extra!"
Because the remaining 16% of bugs that slip past the platform will only be caught by humans.
And that's assuming the 84% it does catch are legitimate bugs.
Then there is the "AI slop" issue: many projects are now fighting the garbage "vulnerability reports" that AIs hallucinate, which are swamping maintainers and causing some projects to curtail their bug bounty programs.
In other words, stop with the hype, already.
Here's the bottom line: AIs are INHERENTLY unreliable and insecure by the very nature of the technology they are built on. They can only be made somewhat reliable and secure by imposing deterministic constraints on them. And that applies to any and all uses of AI, whether coding, security, or simply answering questions.
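To make "imposing deterministic constraints" concrete, here is a minimal sketch in Python. Everything in it is illustrative: `call_model` is a hypothetical stand-in for whatever LLM API a product uses, and the three-field finding schema is invented for the example. The point is only that the model's output is never trusted directly; it must survive strict, rule-based checks before anything acts on it.

```python
# Minimal sketch of deterministic constraints around an AI. Assumptions:
# `call_model` is a hypothetical stand-in for any LLM API, and the
# finding schema below is invented for illustration.
import json
import re

ALLOWED_ACTIONS = {"report", "rescan"}             # closed allowlist; no free-form actions
HOST_PATTERN = re.compile(r"^[a-z0-9.-]{1,253}$")  # crude hostname sanity check

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; a real system would hit a model API here."""
    return '{"action": "report", "target": "app.example.com", "severity": 7}'

def constrained_finding(prompt: str) -> dict:
    raw = call_model(prompt)
    data = json.loads(raw)  # malformed output fails here, loudly
    if set(data) != {"action", "target", "severity"}:
        raise ValueError(f"unexpected fields: {sorted(data)}")
    if data["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"disallowed action: {data['action']!r}")
    if not (isinstance(data["severity"], int) and 0 <= data["severity"] <= 10):
        raise ValueError("severity must be an int in 0..10")
    if not HOST_PATTERN.match(data["target"]):
        raise ValueError(f"suspicious target: {data['target']!r}")
    return data  # only output that passed every deterministic check escapes

if __name__ == "__main__":
    print(constrained_finding("summarize the scan of app.example.com"))
```

None of this makes the model itself reliable; it just bounds the damage an unreliable model can do, which is exactly the argument above.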
The biggest growing market in the near future will be "AI cybersecurity" - cybersecurity and risk management applied to AIs themselves.
Example:
AI Companies Put $12.5M Into Open Source Security to Fix a Problem Their Tools Helped Create
https://itsfoss.com/news/ai-companies-fund-open-source-security/
"The idea behind this move is to tackle the growing problem of AI tools generating security findings (both legit and hallucination ones) at a scale open source maintainers simply cannot keep up with."
I think the intuitively obvious next step would be to have the AI agents propose, and then perhaps eventually enact, the patching or reconfiguration necessary to fix the vulnerabilities. You mentioned this in the case of DARPA's AI challenge, but I don't see details for other cases. Clearly, though, this goes beyond the scope of a standard OffSec / pentest engagement. Thoughts?
Yes sir, it definitely extends well beyond OffSec into every category of Cyber (e.g. AppSec, SecOps, GRC, etc.), and many AI-native firms are looking to disrupt these markets via agents and automation of traditional manual services and labor as well!
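Purely as a sketch of what "propose, and then perhaps eventually enact" could look like, here is one hedged illustration in Python. `draft_patch_with_llm` is a hypothetical stand-in for the agent, the sample diff is invented, and the gates assume a git checkout with a `make test` target. The design choice echoes the point above: the agent only proposes; deterministic gates and a human decide whether anything is enacted.

```python
# Sketch of a propose-then-maybe-enact flow. Assumptions: `draft_patch_with_llm`
# is a hypothetical agent call, and the repo is a git checkout with `make test`.
import subprocess
import tempfile
from pathlib import Path

def draft_patch_with_llm(finding: str) -> str:
    """Hypothetical agent call; a real system would return a model-drafted diff."""
    return (
        "--- a/sshd_config\n"
        "+++ b/sshd_config\n"
        "@@ -1,1 +1,1 @@\n"
        "-PermitRootLogin yes\n"
        "+PermitRootLogin no\n"
    )

def human_approves(diff: str) -> bool:
    """Gate: a person signs off before anything is enacted."""
    print(diff)
    return input("Apply this patch? [y/N] ").strip().lower() == "y"

def tests_pass(repo: Path) -> bool:
    """Gate: the patched tree must pass the test suite (assumes `make test`)."""
    return subprocess.run(["make", "test"], cwd=repo).returncode == 0

def propose_then_maybe_enact(repo: Path, finding: str) -> bool:
    diff = draft_patch_with_llm(finding)
    with tempfile.NamedTemporaryFile("w", suffix=".diff", delete=False) as f:
        f.write(diff)
        patch = Path(f.name)
    # Gate 1: reject patches that do not even apply cleanly (dry run).
    if subprocess.run(["git", "apply", "--check", str(patch)], cwd=repo).returncode != 0:
        return False
    # Gate 2: human sign-off.
    if not human_approves(diff):
        return False
    subprocess.run(["git", "apply", str(patch)], cwd=repo, check=True)
    # Gate 3: tests must pass, or the patch is reverted.
    if not tests_pass(repo):
        subprocess.run(["git", "apply", "-R", str(patch)], cwd=repo, check=True)
        return False
    return True

if __name__ == "__main__":
    propose_then_maybe_enact(Path("."), "root login enabled over SSH")
```

The "perhaps eventually" part of the question maps onto how many of those gates a team is willing to remove over time.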