Jack Cable went from shaping national cybersecurity policy at CISA to founding Corridor to tackle what might be AppSec’s biggest inflection point: a world where AI agents write the majority of enterprise code.
We talk about why shift-left was never enough, what Agentic Security Coding Management actually means, and how you govern code that no human wrote.
SHOW NOTES
The AppSec playbook is being rewritten in real time.
Coding agents are shipping pull requests faster than security teams can triage findings. Vulnerability backlogs that were already unmanageable are about to get worse. And the tooling market is exploding with new vendors while CISOs struggle to tell governance platforms apart from glorified scanners.
This week we sit down with Jack Cable to make sense of all of it. Jack was a Senior Technical Advisor at CISA, where he helped architect the Secure by Design initiative that pushed software vendors to take ownership of security outcomes rather than offloading risk to their customers. Now he’s the founder of Corridor, a company building at the center of a category he’s helping define: Agentic Security Coding Management.
We cover a lot of ground in this conversation:
The origin story. What Jack saw inside the federal government that convinced him the next major security challenge was AI-generated code, and why he left to build a company around it.
The shift-left reckoning. A decade of shifting security left hasn’t solved the vulnerability backlog. Jack makes the case that coding agents don’t just stress-test the shift-left model; they might break it entirely. He explains what has to replace it.
AI as attacker and defender. There’s an uncomfortable duality in the current moment: AI is generating insecure code at unprecedented speed while also being pitched as the fix. Jack walks through how he thinks about that tension and where the line is between legitimate AI remediation and probabilistic guessing stacked on probabilistic guessing.
The frontier labs in AppSec. Anthropic, OpenAI, and Google are all showing up in the application security conversation. Jack shares his read on whether they’re partners, platforms, or eventual competitors to startups in the space, and what it means for durable moats.
Buyer confusion. The AI code security market is crowded and noisy. Jack talks about the most common misconception he hears from CISOs and the question they should be asking every vendor but aren’t.
Governance at enterprise scale. When thousands of developers are running Cursor, Claude Code, and internal agents simultaneously, the governance problem stops looking like code review and starts looking like supply-chain control. Jack lays out what real governance looks like today (policy enforcement, provenance tracking, runtime attestation) and what’s still aspirational.
The regulatory horizon. Drawing on his CISA background, Jack shares where he sees policy landing on AI-generated code: liability frameworks, mandatory disclosure, and the risk of regulation that is either too heavy-handed or absent entirely.