Resilient Cyber Newsletter #15
Software Vendors as Cyber Villains, Organizing Security for Digital Transformation, Companies Skip Hardening in the AI Rush, and the NIST NVD Continues to Crumble
Welcome!
Welcome to another issue of the Resilient Cyber Newsletter, bringing you resources and news covering Cybersecurity Leadership, Market Dynamics, AI and AppSec.
Let’s dive in!
Interested in sponsoring an issue of Resilient Cyber?
This includes reaching over 7,000 subscribers, ranging from Developers, Engineers, Architects, CISOs/Security Leaders, and Business Executives.
Reach out below!
Get more out of your email security budget
When every dollar counts, you want to make the most of what you get. You (hopefully) get funds for anti-phishing tools, but the threat landscape extends beyond the inbox.
With more sophisticated attack flavors at higher volumes than ever, email security must also encompass insider risk scenarios, account takeover protection, and data loss prevention.
See why Material Security is the preferred choice for organizations looking to protect more areas of their Microsoft 365 or Google Workspace footprints under a unified toolkit… and a single line item in the budget.
Cyber Leadership & Market Dynamics
Makers of Insecure Software Are the Real Cyber Villains
CISA Director Jen Easterly delivered a keynote at Mandiant’s mWISE conference this past week, declaring that software suppliers who ship buggy, insecure code are the real villains in the ecosystem.
This continues her push to point out that we don’t need more security products, but more secure products. She said that despite a multi-billion-dollar cybersecurity industry, we still have a multi-trillion-dollar software quality issue that is fueling a multi-trillion-dollar global cybercrime problem.
The commentary goes on to discuss efforts such as CISA’s Secure-by-Design Pledge, which I covered in my article “A Digital Scouts Honor: A look at the recently announced Secure-by-Design Pledge led by CISA”.
The coy title, of course, stems from the fact that the Secure-by-Design Pledge is voluntary, with no real consequences for those who don’t abide by it. Jen also called on customers to ask tough questions of their suppliers during procurement, and to start calling vulnerabilities defects, since that is what they actually are.
This is, of course, a nuanced and complex topic, not least because we don’t even have a standard definition of what “secure” means, nor how to measure it.
I discuss a lot of this nuance in my article “Software’s Iron Triangle: Cheap, Fast and Good - Pick Two”.
Customers Returning to On-Prem?
In an announcement that came during a hearing before the UK’s Competition and Markets Authority (CMA), AWS stated that customers are looking to reduce their reliance on the cloud and revert to on-prem options.
The concept is referred to as “repatriation”: customers moving workloads back on-premises. Reasons cited include reducing cost, adjusting access to technologies, and increasing ownership of their resources, data, and security.
2024 FITARA Scorecard
The House Oversight and Government Reform (OGR) Committee’s IT Scorecard has been released, showing FITARA scores for U.S. federal agencies. It grades agencies across seven key areas:
Agency CIO Enhancements
Transparency and Risk Management
Portfolio Review
Data Center Optimization Initiative
Software Licensing
Modernizing Government Technology
Cyber
Cloud computing and Cyber continue to be some of the most problematic areas for agencies.
Organizing Security for Digital Transformation
Google Cloud’s Office of the CISO released an excellent paper focusing on how security can be organized to enable digital transformation. It includes shifting from a centralized to a product-oriented model, with security becoming a core part of the full SDLC.
They lay out how organizations think transformation goes versus how it actually happens, across four stages: Experimentation and Disintegration, Dissolution, Transformation and Integration. Rather than occurring linearly, it is an iterative process of jumping back and forth between stages and optimizing in an ongoing fashion.
It also discusses how various cyber teams evolve, integrate, and streamline into post-transformation models across these four stages.
This is a good read, with insights from Google Cloud’s interaction with many organizations and security teams. It also shows some of the convergence underway among security teams.
Is Security Living 10-15 Years Behind IT?
That’s the case Maxime Lamothe-Brassard tries to make in his blog titled “It’s Time to Move Cybersecurity Forward”.
Maxime argues that too many cyber vendors sell products that are black boxes to customers, operating behind a magic curtain of “keeping customers safe” without providing transparency on how, and often running on promises and claims rather than proof and outcomes.
Problems cited include the fact that customers cannot see what vendors are doing and are denied basic control and management of the tools, with vendors making changes without knowing how they will impact customers. He alludes to CrowdStrike, although he doesn’t name them.
To fix the issue, the author says vendors must:
Stop selling security tools that simply promise protection, and instead show HOW you’re protected, with actual evidence
Give teams full control of tools to observe, test and validate
Offer practitioners robust API access to manage the tools as they see fit
Build tools using modern principles such as scalability, CI/CD, and IaC
Security Theme Week Takeaways
Andreessen Horowitz (a16z) recently organized a security-themed event with dozens of security leaders from Fortune 250 companies. Security startup Resourcely recapped some of the key takeaways in a blog, which included:
Strengthening DevSecOps with Robust Guardrails
Managing Third-Party Risk
Supply Chain Security
Enhancing Developer Experience and Productivity
Compatibility is Critical
Standardization is Key
Exploring Advanced Technologies and Future-Proofing
The article is short and a great read into key topics that are top of mind for Tech and Security leaders at leading organizations.
AI
Companies skip security hardening in rush to adopt AI
In a tale as old as time, organizations are skipping over key security considerations in their rush to adopt AI. As we’ve seen with previous technological waves such as cloud, companies are making fundamental mistakes or outright ignoring key security practices in the race to adopt.
Cloud security company Orca Security recently published its “2024 State of AI Security Report,” and the findings, while not necessarily surprising, are depressing and concerning.
Some key takeaways include:
56% of organizations are using AI to develop their own custom applications
Cloud provider default settings are a major security concern:
27% of organizations using Azure OpenAI have their accounts publicly exposed
45% of Amazon SageMaker buckets use default naming conventions (see the sketch after this list)
98% of organizations using Google Vertex AI haven’t enabled encryption at rest for self-managed keys
Vulnerabilities are pervasive, with 62% of organizations using packages containing at least one CVE
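Findings like the SageMaker one are easy to check for yourself. Here is a minimal sketch, assuming boto3 with AWS credentials configured, that flags S3 buckets still using SageMaker’s documented default naming convention of sagemaker-<region>-<account-id>; it’s an illustration of the idea, not the report’s methodology:

```python
import re

import boto3

s3 = boto3.client("s3")
sts = boto3.client("sts")

account_id = sts.get_caller_identity()["Account"]
# SageMaker's default bucket is named sagemaker-<region>-<account-id>
default_name = re.compile(rf"^sagemaker-[a-z0-9-]+-{account_id}$")

for bucket in s3.list_buckets()["Buckets"]:
    if default_name.match(bucket["Name"]):
        print(f"Bucket using SageMaker default naming: {bucket['Name']}")
```

Default names matter because they are predictable, which makes the buckets easy for attackers to guess and target.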
The report demonstrates that as organizations are quickly adopting AI, including AI services, managed services, cloud offerings, models and more, they are overlooking fundamental security considerations which will undoubtedly lead to future incidents.
So, buckle up, here we go again.
LLMs and Adversarial Digital Twins
AI security startup Knostic published research showing how they were able to feed an LLM data from a target’s social media account, essentially creating a digital twin of the target, and then a separate digital persona to interact with the target like a best friend.
They were then able to test the effectiveness of LLMs at creating and carrying out advanced social engineering attacks.
This is a really interesting and concerning peek at what is very likely to come from attackers and nation-states as they look to use LLMs to carry out attacks, including social engineering attacks that take advantage of people, personalities, and more.
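To make the mechanics concrete, here is a rough, purely illustrative sketch of the pattern, framed as an authorized red-team simulation: public posts become the grounding for a persona prompt. The posts, prompt wording, and model name below are my own assumptions, not Knostic’s actual methodology:

```python
# Illustrative red-team sketch only; the data and prompt framing are
# hypothetical, not Knostic's actual methodology.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Stand-in for posts scraped from a consenting, simulated target's feed
posts = [
    "Third marathon of the year in the books!",
    "Nothing beats Saturday mornings at the farmers market.",
]

persona_prompt = (
    "You are simulating a digital twin of a person for an authorized "
    "social engineering resilience test. Mirror the interests and tone "
    "evident in these posts:\n" + "\n".join(f"- {p}" for p in posts)
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": "Hey! Saw you ran another marathon, congrats!"},
    ],
)
print(reply.choices[0].message.content)
```

The unsettling part is how little is required: a handful of public posts and a system prompt get you a convincing twin.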
AppSec, VulnMgt and Software Supply Chain
AWS extends Vulnerability Disclosure Program (VDP) to include HackerOne
AWS already had a public VDP that paid security researchers up to $25k for identified vulnerabilities, but had not partnered with HackerOne until now. The partnership was announced at the recent “fwd:cloudsec Europe” event, in a talk titled “Doing bad things for the right reasons,” which took a look at the AWS vulnerability disclosure and remediation process.
Here is a great article covering some of the key takeaways, and you can learn more at the AWS HackerOne page.
Given how fundamental AWS is to so many other systems, applications and vendors, this is a great extension of their VDP.
Prioritizing Detection Engineering
Detection Engineering continues to grow in popularity, as organizations look to create, improve and maintain their ability to identify incidents and potentially malicious activities in their environments.
This blog from Ryan McGeehan builds on an earlier blog called “Lessons Learned in Detection Engineering.” Ryan lays out key points and considerations for approaching and maturing an organization’s Detection Engineering activities:
Detection Engineering's Focus: Detection engineering involves balancing technical capabilities with management and prioritizing detection efforts within a security program. It emphasizes avoiding premature implementation of detection measures that can overwhelm teams.
Implementation Phases:
Phase 1: Logging: Start with minimal, essential logs (e.g., authentication and infrastructure) to respond to common scenarios. Avoid over-collecting logs or building an extensive alerting system early on.
Phase 2: Hardening: Before formalizing detection, focus on securing the environment (e.g., reducing privileges, centralizing authentication) and creating a stable system. Avoid premature staffing of dedicated detection engineers.
Phase 3: High-Quality Alerts: Introduce high-quality detections based on key system invariants (e.g., secrets management, IaaS API usage, honeytokens); a minimal honeytoken example follows this list. Ensure alerts are well-documented, with clear responses and minimal false positives.
Phase 4: Management: Be mindful of detection management before fully committing to detection programs. Address operational management challenges like alert noise, productization, and collaboration between detection engineers and response teams.
Detection Engineering Team: Staffing should be conservative. Detection engineering might not require more than one dedicated person. Cloud-native tools simplify the process, and prematurely growing the team can create management burdens.
Management Risks: If detection engineering is implemented too soon, it can lead to inefficiencies, burnout, and management challenges. Organizations should integrate detection efforts with existing management systems rather than creating redundant structures.
Long-Term Approach: Detection should be integrated into the overall security strategy without overtaking the focus on mitigation. Detection work is attractive but should not replace the collaborative mitigation efforts needed for security.
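To make Phase 3 concrete, here is a minimal sketch of an invariant-based detection using a honeytoken AWS access key: the key should never be used, so any CloudTrail activity tied to it is a high-signal, low-false-positive alert. The key ID is AWS’s documentation placeholder, and the 15-minute window stands in for whatever scheduling your pipeline uses:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical honeytoken key, planted where an attacker might find it.
# The invariant: legitimate activity with this key is impossible.
HONEYTOKEN_KEY_ID = "AKIAIOSFODNN7EXAMPLE"

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "AccessKeyId", "AttributeValue": HONEYTOKEN_KEY_ID}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(minutes=15),
)

for event in events.get("Events", []):
    # Any hit violates the invariant; this is where you'd page a human.
    print(f"ALERT: honeytoken used in {event['EventName']} at {event['EventTime']}")
```

This is a toy, of course; in practice the result would feed your alerting pipeline with documented response steps, per Ryan’s guidance.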
NIST NVD Continues to Fall Short
In a blog post from Tom Alrich, he demonstrates how the NVD continues to come up short in its role of enriching CVEs. This includes:
18,000+ unenriched CVEs from February–September 2024
An enrichment rate of less than 50% of new CVEs monthly since June 2024, while not touching a SINGLE CVE from February–May 2024
This is despite NIST making a new contract award for contractor support in May, in which they claimed they would return to pre-February CVE processing rates, yet four months later… the numbers continue to pile up.
They originally claimed the backlog would be cleared by the end of FY2024, but that deadline is coming up quickly as we close out September, and there is simply no way they will enrich 18k CVEs in time, so they have fallen short of that too. It is unclear whether the fault lies with the contractor, NIST, or both.
Either way, NVD’s credibility continues to crumble in the security ecosystem.
I covered this NVD saga in great depth in one of my most popular articles ever, titled “Death Knell of the NVD?”
While I, along with many others, hoped the NVD would be salvaged, so far that doesn’t seem to be the case, as they struggle with the velocity of CVEs that need to be analyzed and enriched.
Ironically enough, they are struggling much like downstream organizations are when it comes to the rate of vulnerabilities they’re expected to handle.
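If you want to gauge the backlog yourself, the enrichment status of any CVE is visible via the public NVD 2.0 API: records stuck in “Received” or “Awaiting Analysis” have not been enriched with CVSS and CPE data. A minimal sketch:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def enrichment_status(cve_id: str) -> str:
    """Return NVD's status for a CVE, e.g. 'Analyzed' or 'Awaiting Analysis'."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    return vulns[0]["cve"]["vulnStatus"] if vulns else "Not Found"

print(enrichment_status("CVE-2024-3094"))  # the xz-utils backdoor
```

Run this across a month of new CVE IDs and the sub-50% enrichment rate Tom describes becomes painfully visible.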
2024 Open Source Maintainer Report
Tidelift released their 2024 Open Source Maintainer Report which has some great insights as always when it comes to the open source software (OSS) ecosystem.
The report always sheds light on the dynamics of the OSS ecosystem and key factors that contribute to downstream risks in OSS, which is used in 70-90% of modern code bases, and powers everything from consumer goods to critical infrastructure.
Overall, the report highlights the fact that the majority of OSS maintainers are unpaid, and that paid maintainers are more likely to actively maintain their projects, fix issues, remediate vulnerabilities, implement new features, and be more responsive overall.
Given how much of the modern digital ecosystem relies on OSS, the message is clear, companies need to do more financially to support the OSS projects and maintainers they rely on.
This, of course, is a well-documented problem: OSS suffers from a “tragedy of the commons,” a situation in which many people with unfettered access to a finite, valuable resource overuse and exhaust it, destroying its value.
Except in this case, the resource being exhausted is the OSS maintainers themselves. That is supported by findings such as almost half of maintainers feeling under-appreciated, feeling like their work is thankless, and the OSS community aging, sustained by increasingly older maintainers.
One of the best papers I’ve ever read on the topic is from researcher Chinmayi Sharma, who wrote a paper titled “Tragedy of the Digital Commons”.