Discussion about this post

richardstevenhack

"if an autonomous platform can deliver continuous exposure validation at $6,000 a pop"

As Han Solo said: "Well, that's the real trick now, isn't it? And it will cost you extra!"

Because the remaining 16% of uncaught bugs will only be caught by humans.

And that's assuming the 84% are legitimate bugs.

There is also the "AI slop" issue: hallucinated "vulnerability reports" generated by AIs are swamping many open-source projects, and some have curtailed their bug bounty programs to stem the garbage.

In other words, stop with the hype, already.

Here's the bottom line: AIs are INHERENTLY unreliable and insecure, by nature of the technology they are built on. They can only be made somewhat reliable and secure by imposing deterministic constraints on them. And that applies to any and all uses of AI, whether coding, security, or simply answering questions.

The biggest growing market in the near future will be "AI cybersecurity" - cybersecurity and risk management applied to AIs themselves.

Eric Sherrill

I think the intuitively obvious next step would be to have the AI agents propose, and perhaps eventually enact, the patching or reconfiguration needed to fix the vulnerabilities. You mentioned this for DARPA's AI challenge, but I don't see details for other cases. Clearly, though, that goes beyond the standard OffSec / pentest engagement. Thoughts?

