Most organizations deploying AI today cannot answer a deceptively simple question. Which model is actually running in their environment?
This is not a hypothetical concern. Model substitution, supply chain compromise, adversarial fine-tuning, and jurisdictional compliance gaps are all live risk vectors, and the industry has largely relied on contractual guarantees from AI vendors rather than technical controls to address them.
That gap is exactly what Project VAIL was built to close.
In this episode I sat down with Manish Shah, Co-founder and CEO of Project VAIL (Verifiable Artificial Intelligence Layer). Manish is a repeat founder with 20+ years of company-building experience, including as a co-founder of LiveRamp, and he is now bringing that background to one of the most consequential unsolved problems in AI security: provably verifying which model is executing in your environment at runtime.
VAIL’s approach combines two core technologies. Behavioral fingerprinting creates a unique, verifiable identity for an AI model based on how it actually behaves during inference, without requiring access to model weights or architecture. zkTorch, developed in collaboration with researchers at UIUC, brings zero-knowledge proofs to large generative AI models at practical scale for the first time, enabling cryptographic verification of model computations without exposing sensitive model internals.
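To make the fingerprinting idea concrete, here is a minimal sketch in Python. VAIL’s actual technique is not public, so the probe prompts, hashing scheme, and function names below are illustrative assumptions, not their implementation.

```python
import hashlib
import json

# Illustrative sketch only: the assumption is that a model's responses to a
# fixed, curated set of probe prompts (under deterministic decoding, e.g.
# temperature=0) are stable enough to serve as an identity, with no access
# to weights or architecture required.

PROBE_PROMPTS = [  # assumed: a curated set of discriminating prompts
    "Complete the sequence: 2, 3, 5, 7, 11,",
    "Translate 'verifiable computation' into French.",
    "State the CAP theorem in one sentence.",
]

def behavioral_fingerprint(generate, prompts=PROBE_PROMPTS):
    """Derive a fingerprint from inference behavior alone.

    `generate` is any callable mapping a prompt string to the model's
    completion string (a hypothetical interface, not VAIL's API).
    """
    responses = [generate(p) for p in prompts]
    return hashlib.sha256(json.dumps(responses).encode()).hexdigest()

def verify_model(generate, registered_fingerprint):
    """Check that the model serving traffic matches its registered identity."""
    return behavioral_fingerprint(generate) == registered_fingerprint
```

In practice a robust scheme would likely need fuzzy or statistical matching rather than an exact hash, since completions can drift across serving stacks even under deterministic decoding; the exact comparison here is a simplification.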
We covered a lot of ground in this conversation, including:
Why behavioral fingerprinting is a fundamentally different and more resilient approach to model identification
How model identity becomes a critical security primitive as agentic AI deployments expand
Detecting prohibited and derivative models, including open-source models derived from Chinese-origin foundations like DeepSeek and Qwen
Where frameworks like NIST AI RMF and the EU AI Act fall short on model verification requirements
How verified model fingerprints fit into zero-trust architectures for AI systems and agentic workflows (a rough sketch follows this list)
What standardization for verifiable AI needs to look like and which bodies should be driving it
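On the zero-trust point, here is a rough sketch of what gating agentic calls on a verified fingerprint could look like, reusing behavioral_fingerprint from the sketch above. The per-workload allowlist and function names are assumptions for illustration, not VAIL’s API.

```python
# Hypothetical zero-trust gate: re-verify the model's runtime identity
# before each agentic action, rather than trusting it once at deploy time.
# The workload names and pinned values below are placeholders.

ALLOWED_FINGERPRINTS = {
    "claims-triage-agent": {"<pinned-fingerprint-hex>"},  # pinned per workload
}

def gated_call(workload, generate, prompt):
    """Refuse the request unless the backing model matches its pinned identity."""
    fingerprint = behavioral_fingerprint(generate)
    if fingerprint not in ALLOWED_FINGERPRINTS.get(workload, set()):
        raise PermissionError(f"model behind {workload!r} failed identity verification")
    return generate(prompt)
```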
Model verification is not a niche research problem. It is becoming a foundational requirement for AI governance, compliance, and security in regulated industries and high-stakes deployments alike.
This episode gives you both the technical grounding and the strategic context to understand why.