Resilient Cyber Newsletter #10
Security Tool Sprawl, Wiz Achieves FedRAMP Moderate, Cyber Gatekeeping & A Tidal Wave of AI-driven Insecure Code
Welcome!
Welcome to another issue of the Resilient Cyber Newsletter - hard to believe we’re now hitting the double digits.
A lot of great resources for you this week, spanning Cybersecurity Leadership, AI, and AppSec, so let’s dig in! 👇
CI/CD Security Best Practices
Looking to bolster your CI/CD pipeline?
This comprehensive guide provides you with actionable best practices to mitigate CI/CD security risks. Unlock this resource to learn about infrastructure security, code security, access, and monitoring.
In each section, you’ll find technical background information, actionable items, code snippets, and screenshots, empowering you to take a holistic approach to fortifying your CI/CD pipelines.
In this 13-page cheat sheet, Wiz covers best practices in the following areas of the CI/CD pipeline (a small sketch of the secrets-management piece follows the list):
Infrastructure security
Code security
Secrets management
Access and authentication
Monitoring and response
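As a small taste of what the secrets-management area can look like in practice, here’s a minimal sketch (my own illustration, not taken from the Wiz cheat sheet) of a CI step that fails the pipeline if common credential patterns show up in the files it’s pointed at:

```python
import re
import sys

# A couple of well-known credential patterns; real scanners ship far larger rule sets.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
}

def scan(path: str) -> list[str]:
    """Return suspicious findings for one file."""
    findings = []
    try:
        text = open(path, encoding="utf-8", errors="ignore").read()
    except OSError:
        return findings
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(f"{path}: possible {name}")
    return findings

if __name__ == "__main__":
    # Example CI usage: python check_secrets.py $(git diff --name-only origin/main)
    hits = [hit for path in sys.argv[1:] for hit in scan(path)]
    for hit in hits:
        print(hit)
    sys.exit(1 if hits else 0)  # non-zero exit fails the pipeline
```

In practice you’d reach for a purpose-built scanner with a much richer rule set, but the principle is the same: catch secrets before they land in the repo or its build artifacts.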
Interested in sponsoring an issue of Resilient Cyber?
This means reaching over 6,000 subscribers, ranging from Developers, Engineers, and Architects to CISOs/Security Leaders and Business Executives.
Reach out below!
Cybersecurity Leadership & Market Dynamics
Security Tool Sprawl Getting Worse - Not Better
There has been a lot of talk about “Platformization,” led by industry leaders such as Palo Alto and others pushing organizations to rationalize their security tool portfolios. There is a lot of merit to the push, too, as tool sprawl can lead to cognitive overload, cost, complexity, and increased attack surface.
However, the consolidation narrative is, as it turns out, just that: a narrative. Data indicates tool sprawl is getting worse, not better. Some estimates put the average number of security tools at 70-90 per organization, and a recent survey even found that 51% of respondents expect to increase the number of tools/vendors in their stack, while only 9% said they plan to decrease them.
This article from siliconANGLE is a good one, diving into the factors leading to security tool sprawl, from increased attack surface, changing threats, technology evolutions and more.
To put it bluntly: in security, we’re a mess, and it is only getting worse.
Wiz Achieves FedRAMP Moderate
Big news for the Wiz Federal team this week as they announced their FedRAMP Moderate Authorization. Much like Wiz set records for ARR growth and the timeline to reach it, they became the fastest company to be listed as FedRAMP Moderate Authorized on the FedRAMP Marketplace.
This is a testament not only to Wiz’s caliber of product and capability, but also to their organizational commitment to the U.S. Public Sector. They’ve built a great team of leaders, such as Dean Scontras, Mitchel Herkis, Chris Saunders and more, all of whom boast significant experience serving the U.S. Government space.
For those not familiar, FedRAMP is the compliance authorization process by which Cloud Service Offerings from Cloud Service Providers (CSPs) get approved for use within the U.S. Federal ecosystem.
This now gives the U.S. Government, for Moderate-impact workloads, access to an industry-leading Cloud Native Application Protection Platform (CNAPP) in Wiz, which is timely as the Government continues to move more of its systems and data to the cloud.
Wiz mentioned they are also pursuing Impact Level (IL) 4 authorization for the DoD community.
The Fallacy of Cybersecurity “Gatekeeping”
My friend Ross Haleliuk coming in with a spicy hot take tackling the topic of Cybersecurity “gatekeeping”.
We often hear stories or posts from folks lamenting the concept of gatekeeping in cybersecurity. It generally revolves around someone either not getting a role or position they wanted, or others taking to platforms to claim that there is an arbitrary force at play keeping people excluded in some capacity in the cybersecurity workforce.
The commentary often comes from social media influencers or those looking to garner support from the community to fight some invisible oppressive force that is keeping folks from the role or position they’re after in the security career field.
Ross’ post lays out the fact that the problem is actually more complex than some invisible force or collusive activity that is keeping people out or down. He discusses factors such as the high risk/impact nature of security jobs, limited team sizes and budgets, the fact that security isn’t an “entry level” job, since it requires prerequisite knowledge (this one will really get people fired up), and more.
He also notes that security isn’t unique; other areas of business and technology also suffer from broken talent pipelines and workforce woes. While I do agree that we have many self-imposed workforce constraints from broken hiring processes, inefficient position descriptions and more, it isn’t as simple as blaming someone else. In fact, in life, it is often easy to blame someone else when we don’t get what we want, or something doesn’t go our way.
That’s certainly one route we can take, but it usually doesn’t help, nor get us where we want to go. The best option is understanding the ecosystem, optimizing your ability to navigate it and truly doing the hard work to achieve what you’re after.
Forbes Cloud 100 - Cybersecurity Stands Out
Forbes recently published their “Cloud 100” list, which captures the world’s top private cloud computing companies. While some AI, Fintech and other interesting companies made their way on and up the list, what was awesome to see was the cybersecurity presence.
This was captured by Cole Grolmus, of Strategy of Security, who showed there were 17 cybersecurity companies on the list, each of which had some amazing metrics around ARR, acquisitions and market growth.
Cybersecurity and the Ostrich Algorithm
We’ve all likely heard the analogy of burying your head in the sand and hoping problems will go away, much like the myth associated with ostriches.
The analogy applies in finances, health, relationships, and, yes, cybersecurity. This is a good article discussing how it applies in security, and the importance of activities such as threat modeling to identify and address risks.
Threat Modeling is a process aimed at addressing cyber threats proactively and is fundamental to the concept of Secure-by-Design, building security in or any other mantra that emphasizes embedding security into the processes we use to design, develop and deploy digital systems.
The author discusses how, in security, we often choose to ignore threats because they seem unlikely, the solution is costly or cumbersome, or we lack the experience and expertise to deal with them.
This leads to threats being ignored, much like sticking our head in the sand, and leaning into the “subtle lure of ignorance,” as the author calls it.
It’s a vicious cycle, leading to threats piling up, being exploited, and impacting the organization.
AI
GitHub Code Scanning Autofix
GitHub announced a feature within GitHub Advanced Security (GHAS), currently in beta, that uses AI to speed up vulnerability fixes while coding. It is powered by GitHub Copilot and CodeQL and claims to cover 90% of alert types in JavaScript, TypeScript, Java, and Python.
The announcement mentioned it can address more than two-thirds of vulnerabilities with little to no manual editing required. They did, however, caution that developers should still review the suggested fixes to ensure they don’t break functionality or only partially address the vulnerability.
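To make the alert-plus-suggested-fix flow concrete, here’s a hypothetical illustration (my own, not actual Copilot Autofix output) of the kind of issue these tools flag, a SQL query built from raw user input, alongside the parameterized shape a suggested fix typically moves you toward:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Flagged pattern: user input interpolated directly into the SQL string,
    # the classic SQL injection alert a scanner like CodeQL raises.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # The shape of a typical suggested fix: a parameterized query, so the
    # driver handles escaping and the input can no longer alter the SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Even for a fix this mechanical, the caution above stands: a reviewer still needs to confirm the change doesn’t alter behavior the tool can’t see.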
That said, this is a great example of the promising potential of AI to disrupt vulnerability management, software development and cybersecurity.
An AI-driven Tidal Wave of Vulnerable Code
We’ve already seen accelerated development cycles with the shift to agile, DevOps, and automation. The adoption of GenAI and LLMs is accelerating the pace of software development even further, and with it, the pace of inherently insecure, buggy code.
At least that is the claim of Chris Wysopal and companies like Veracode, who, in my opinion, produce some excellent software security reports. In this article they discuss the existing challenges organizations face in producing secure code and keeping up with the pace of bugs and vulnerabilities, and how the adoption of GenAI/LLMs, and AI more broadly, is only accelerating the problem.
The article cites studies showing that code generated by tools like Microsoft Copilot is 41% more likely to contain security vulnerabilities. These metrics align with studies from Snyk and others I have come across as well. It also points to the fact that developers will increasingly outsource software development to AI.
It also cites studies from Purdue showing that, despite the acceleration GenAI brings, tools like ChatGPT are wrong more than half of the time when diagnosing coding errors, and that human developers tend to lean on the ChatGPT outputs without verifying whether they are actually accurate.
There is also the acknowledgement that GenAI and LLMs largely write insecure code because their training data is drawn from large swaths of existing code, which is itself vulnerable.
Garbage in, garbage out.
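To illustrate the point with a concrete example of my own (not one from the article), consider a pattern that saturates older tutorials and public repos, and therefore the training data: fast, unsalted password hashing. An assistant echoing its training data will happily produce the first function; the second is the kind of thing you still have to know to ask for:

```python
import hashlib
import os

def hash_password_weak(password: str) -> str:
    # The pattern that dominates older code and tutorials (and therefore
    # training data): a fast, unsalted hash, trivially cracked offline.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_better(password: str, iterations: int = 600_000) -> str:
    # A stronger stdlib-only alternative: salted, slow PBKDF2-HMAC-SHA256.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"
```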
NIST Produces a Series of Tools/Guidance on AI Safety and Security
In response to the AI Executive Order (AI-EO), NIST has produced a series of tools and guidance on AI safety and security.
These include:
Preventing Misuse of Dual-Use Foundation Models
Testing How AI System Models Respond to Attacks
AI RMF GenAI Profile
SSDF Practices for GenAI
A Plan for Global Engagement on AI Standards
AI - What Could Go Wrong?
While there is a lot of excitement about AI, including cybersecurity use cases, many are also asking, what could go wrong?
As it turns out, quite a lot, or at least that is how it appears in an MIT database that aggregated over 700 potential AI risks.
It includes risks ranging from bias, hallucinations, and addiction to the potential for AI to be used in weapons, and more. While this seems like a broad and non-specific dataset, it is still interesting to explore.
AppSec, VulnMgt and Software Supply Chain Security
How Container Vulns Get Fixed
Vulnerability management can be a complex topic. This is even more so the case when dealing with containerized workloads, Kubernetes orchestration and modern cloud-native environments.
In this article from Latio Tech, James digs into how container vulnerabilities actually get fixed and what is going on under the hood.
James dives into the complexity of vulnerabilities in containers, the challenges they cause developers, and the need to truly optimize vulnerability prioritization beyond sources such as KEV and EPSS to provide actual runtime context.
I’d also like to personally give a shoutout to teams such as RAD Security, who are providing Kubernetes Detection and Response (KDR), moving beyond signatures and utilizing behavioral fingerprints and runtime verification.
Resolution Paths Replacing Risk Remediation?
We have talked in many issues of Resilient Cyber about the growing, complex vulnerability management problems across the ecosystem. This is a thought-provoking piece from startup ZEST Security that discusses the need to move from risk remediation to resolution paths.
It touches on the concept that visibility is not security, and how we have a remediation problem. The security landscape is flush with tools to help identify and prioritize risks, but what about actually fixing them? That is an area due for disruption, especially as teams have thousands of vulnerabilities and risks documented but lack the capacity and capabilities to actually address them, so they just keep piling up in a never-ending vulnerability backlog or risk register.
ZEST discusses the need for resolution paths that both remediate existing risks and prevent them from occurring again, or in other words, the need to address issues at the root.
A Complete ASPM?
One term that has become trendy is Application Security Posture Management, or “ASPM” for short. However, the term often means something different to everyone you ask, largely due to the complexity of the modern application and security landscape and the attack surface and considerations involved.
This particular article is the first of a three-part series and focuses on Pipeline Security within the context of a broader ASPM solution.
The four aspects of pipeline security covered in the article (a small build-integrity sketch follows the list) include:
Secrets Detection
CI/CD Security
Source Code Leakage
Build Integrity
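On the build-integrity point, the core idea is easy to sketch: record a digest of the artifact at build time and verify it before deploy, so a tampered artifact gets rejected. Here’s a minimal illustration of my own (checksums only; real-world build integrity leans further on signing and provenance attestations):

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 of a build artifact, streamed so large files are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(artifact: Path, expected_digest: str) -> None:
    """Refuse to proceed if the artifact doesn't match what the build produced."""
    actual = digest(artifact)
    if actual != expected_digest:
        raise RuntimeError(
            f"build integrity check failed for {artifact}: "
            f"expected {expected_digest}, got {actual}"
        )

# Build stage: record digest(Path("app.tar.gz")) alongside the artifact.
# Deploy stage: call verify(Path("app.tar.gz"), recorded_digest) before shipping.
```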
We’re increasingly seeing the role that SDLC/Pipeline Security plays in software supply chain security, primarily highlighting the fact that it isn’t just about secure code, but securing the underlying infrastructure, systems and processes that facilitate its creation.
I covered this topic in great depth in my book Software Transparency.
Major GitHub Repos Leak Access Tokens, Putting Code and Clouds at Risk
Last week, news broke that a researcher from Palo Alto Networks found secrets within the artifacts of dozens of public repos from some of the biggest companies in the industry, including Google, Microsoft, AWS, and more.
The artifacts included GitHub tokens, which can be used to inject malicious code through pipelines or access secrets within the repos.
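For teams wondering whether their own workflow artifacts are exposed in the same way, the check is straightforward to sketch: pull down the artifacts and search their contents for token-shaped strings before someone else does. A minimal illustration (the token pattern and usage here are my assumptions for the example, not details from the research):

```python
import re
import sys
import zipfile

# GitHub token prefixes (personal, OAuth, user-to-server, server-to-server, refresh).
TOKEN_RE = re.compile(rb"gh[pousr]_[A-Za-z0-9]{36,}")

def scan_artifact(artifact_zip: str) -> list[str]:
    """Look inside a downloaded workflow artifact for GitHub token-shaped strings."""
    findings = []
    with zipfile.ZipFile(artifact_zip) as archive:
        for member in archive.namelist():
            data = archive.read(member)
            for match in TOKEN_RE.findall(data):
                # Print only a prefix of the match to avoid re-leaking the token.
                findings.append(f"{artifact_zip}:{member}: {match[:12].decode()}...")
    return findings

if __name__ == "__main__":
    # Usage: python scan_artifacts.py artifact1.zip artifact2.zip ...
    for path in sys.argv[1:]:
        for finding in scan_artifact(path):
            print(finding)
```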
This is yet another incident highlighting the need to protect and secure not just software, but the underlying infrastructure, systems, and processes that facilitate its development and deployment.
I’ve covered this in previous books, as well as my article “NIST Provides Solid Guidance on Software Supply Chain Security in DevSecOps”.