State of Dependency Management - 2024
A deep dive into the 2024 Endor Labs State of Dependency Management Report and its key findings
Endor Labs recently released their “State of Dependency Management” report for 2024, the organization's third annual edition. It has historically offered great insights related to open source, vulnerability management, and software supply chain security, and this year is no different.
So let’s take a look at some of the key findings and takeaways.
Context
For those unfamiliar, dependencies typically refer to external code and libraries that software requires in order to function, and dependency management is the effective governance of those dependencies. This topic has grown in importance in recent years due to the surge in OSS usage, as well as widespread software supply chain attacks targeting the OSS ecosystem and widely used third-party dependencies.
The report starts by discussing the significant role that open source and third-party dependencies play in modern software development. This includes an estimated $8.8 trillion in development cost savings, citing a report from Harvard.
However, despite the widespread adoption and use of open source by organizations, most organizations are immature when it comes to identifying their dependencies, understanding vulnerabilities associated with them, and then effectively prioritizing vulnerabilities for remediation without burying developers in toil and noise.
This of course is problematic due to the massive rise of software supply chain attacks, including on open source, as found by others such as Sonatype in their “State of the Software Supply Chain” report, as pictured below:
Cutting Through the Noise
We know that modern vulnerability management requires context.
Teams that take legacy approaches, rallying around blunt metrics such as CVSS base scores, drown in toil, often spending scarce time and resources remediating vulnerabilities that aren't known to be exploited, aren't likely to be exploited, and aren't even exploitable because they aren't reachable.
This context comes in the form of leveraging resources such as the CISA Known Exploited Vulnerabilities (KEV) catalog, the Exploit Prediction Scoring System (EPSS), and reachability analysis.
These, when coupled with business context such as asset criticality, data sensitivity, and internet exposure, really help crystallize where vulnerability management capital should be spent.
The report points out that less than 9.5% of vulnerabilities are exploitable at the function level, and that organizations combining reachability analysis with EPSS see a 98% reduction in vulnerability management noise.
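To make that combination concrete, below is a minimal sketch of what a context-aware triage filter might look like. The finding fields, thresholds, and sample data are illustrative assumptions rather than Endor Labs' methodology; the point is simply that each signal (KEV membership, EPSS probability, reachability) strips away another layer of noise before anything reaches a developer.

```python
# Illustrative sketch: filtering a vulnerability backlog with exploitation and
# reachability context. Field names and thresholds are assumptions, not any
# vendor's actual schema.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float          # CVSS base score from the advisory
    epss: float          # EPSS probability of exploitation (0.0 - 1.0)
    in_cisa_kev: bool    # listed in CISA's Known Exploited Vulnerabilities catalog
    reachable: bool      # vulnerable function reachable from application code

def prioritize(findings: list[Finding], epss_threshold: float = 0.1) -> list[Finding]:
    """Keep only findings with real exploitation signal AND a reachable code path."""
    actionable = [
        f for f in findings
        if f.reachable and (f.in_cisa_kev or f.epss >= epss_threshold)
    ]
    # Work the riskiest items first: known-exploited, then likelihood, then severity.
    return sorted(actionable, key=lambda f: (f.in_cisa_kev, f.epss, f.cvss), reverse=True)

backlog = [
    Finding("CVE-2021-44228", 10.0, 0.97, True, True),   # Log4Shell: exploited and reachable, keep
    Finding("CVE-2023-99999", 9.8, 0.01, False, False),  # placeholder ID: severe on paper, but noise
]
print([f.cve_id for f in prioritize(backlog)])
```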
However, even with this data in hand, remediation isn't always simple: major version updates include a breaking change nearly a quarter of the time, minor updates may break a client 94% of the time, and patches carry a 75% chance of disruption.
Broken Vulnerability Database Ecosystem
Another challenge AppSec teams face is the fact that the vulnerability database ecosystem is a mess.
I’ve covered this previously in articles such as “Death Knell of the NVD,” where I explained how the NIST National Vulnerability Database (NVD) essentially faltered and failed to operate effectively in early 2024, a setback it still hasn't recovered from, with tens of thousands of vulnerabilities lacking analysis and enrichment with data such as Common Platform Enumeration (CPE) identifiers, which are needed to tie CVEs to specific products.
Currently the NVD has over 18,000 unenriched CVEs from February through September 2024, has enriched less than 50% of new CVEs each month since June 2024, and hasn't even touched a single CVE from February through May 2024.
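You can see this enrichment gap for yourself against the public NVD 2.0 API; the short sketch below checks whether a given CVE record carries any CPE configuration data yet. It assumes the documented JSON response shape and skips API keys, retries, and rate limiting, so treat it as an illustration rather than a production client.

```python
# Illustrative check of whether an NVD record has been enriched with CPE data.
# Uses the public NVD 2.0 API; field names follow the documented response
# shape, but this is a sketch, not a robust client.

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def has_cpe_configurations(cve_id: str) -> bool:
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return False
    # Enriched records carry a "configurations" block mapping the CVE to CPEs;
    # records still awaiting analysis typically do not.
    return bool(vulns[0].get("cve", {}).get("configurations"))

print(has_cpe_configurations("CVE-2021-44228"))  # long-enriched CVE -> True
```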
This problem isn't likely to change anytime soon either. As vulnerability researcher Jerry Gamblin demonstrated below, CVE volume is growing roughly 30% year-over-year from 2023 to 2024, so it's no surprise the NVD is struggling to keep up.
The problems with public vulnerability databases are further emphasized in the State of Dependency Management Report.
They highlight how roughly 25 days pass between when a public patch becomes available and when an associated advisory is published in the public vulnerability databases organizations rely on, such as the NVD.
Even then, only 2% of those advisories include information about which specific library function contains the vulnerability, and 25% of advisories include incorrect or incomplete data, further complicating vulnerability management efforts.
Delayed publication of security advisories into databases such as the NVD presents real challenges for organizations.
As I discussed in my article “Vulnerability Exploitation in the Wild”, sometimes CVEs see active exploitation attempts as quickly as 22 minutes after a Proof-of-Concept (PoC) exploit has been made available.
This is problematic for various reasons. As documented in Endor Labs' report, 69% of security advisories lag the corresponding security release, with a median delay of 25 days before an advisory appears in public vulnerability databases.
This presents an initial lag between when exploitation may occur and when typical security scanning tools, which rely on public vulnerability databases, would even pick up the vulnerability, let alone the additional time it then takes organizations to resolve the findings.
In fact, in data recently shared by Wade Baker of the Cyentia Institute, vulnerability remediation timelines average 212 days overall, and in some cases run longer.
You don't need a degree in math to understand that when exploitation can occur in minutes and remediation occurs in months, there's an impedance mismatch.
This further drives home the need for effective vulnerability prioritization and remediation, to try to outpace exploitation activity and focus on the risks that actually matter.
Even the GitHub Advisory Database (GHSA), another widely used vulnerability database, has challenges cited in the report: for 25% of vulnerabilities with both a CVE and a GHSA entry, the GHSA is published 10 or more days after the CVE, which leaves organizations with CPE-mapping gaps when trying to identify which parts of their tech stack are impacted by a specific vulnerability in an OSS component.
This timeline challenge, spanning vulnerability discovery, fixes being made available, security releases and release notes being published, and the issue finally making it into an advisory, is captured in the image below.
The more delay in this timeline, the more risk organizations impacted by the vulnerabilities face, coupled with their own lengthy remediation timelines as I mentioned above.
The Endor Labs report also captured this delay using previous studies, as shown below:
Another challenge with the NVD, although not cited in the report, is its lack of native support for Package URLs (PURLs) as identifiers within CVEs. PURL support would enable vulnerabilities to be tied to specific OSS packages and libraries, making the findings and data more relevant to open source vulnerabilities. This is well documented and discussed as a fundamental gap of the NVD in a paper from the SBOM Forum titled “Proposal to Operationalize Component Identification for Vulnerability Management”.
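For readers less familiar with the two identifier schemes, the snippet below contrasts a CPE and a PURL for the same component, using Log4j 2.14.1 purely as an example. The CPE names a vendor and product as the NVD models them, while the PURL points at the exact package coordinates in its ecosystem, which is why PURL support matters so much for OSS matching.

```python
# Two ways of identifying the same open source component (Log4j 2.14.1),
# shown side by side for illustration.

# CPE 2.3: vendor/product/version naming as used by the NVD (the exact string
# the NVD assigns for a given CVE may differ).
cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

# Package URL (PURL): ecosystem, namespace, package, and version, which maps
# directly onto how the artifact is actually consumed from Maven Central.
purl = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"

print(cpe)
print(purl)
```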
This is backed up by the Endor Labs report, which found that across six different ecosystems, 47% of advisories in public vulnerability databases do not contain any code-level vulnerability information at all, only 51% contain one or more references to fix commits, and only 2% contain information about the affected functions.
This demonstrates that most widely used vulnerability databases lack granular code-level information about vulnerabilities, and instead often paint with a broad brush, lacking the granularity required for modern software supply chain security with regard to OSS.
Despite these challenges in the NVD, it is still the most widely used vulnerability database. Others have of course gained prominence, such as OSV or GitHub’s GHSA, but their adoption is still not at the scale of the NVD.
This leads to problems where customers and organizations must query and rationalize inputs from multiple disparate vulnerability databases, often with conflicting or duplicative content. That information then needs to be coupled with organization-specific context about vulnerabilities to be actionable, such as business criticality, data sensitivity, and internet exposure, along with the vulnerability intelligence we've discussed, such as known exploitation or exploitation probability.
Another key problem discussed by the report is that of phantom dependencies, where dependencies exist in the application's code search path but aren't captured in its manifest files, leading to shadow dependency risks that are easy for AppSec teams to overlook. Organizations can mitigate these risks by analyzing both direct and transitive dependencies in the code search paths, as well as by building dependency graphs.
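As a rough illustration of what that looks like in practice, the sketch below compares the modules a Python codebase actually imports against what its requirements.txt declares and flags the difference. The paths and the naive manifest parsing are assumptions for the example; real tools also resolve transitive dependencies and handle import-name versus package-name mismatches.

```python
# Rough sketch: flag imports used in source code that are not declared in the
# project's manifest (requirements.txt here). Real SCA tools also walk
# transitive dependencies and build full dependency graphs.

import ast
import pathlib
import sys

def imported_top_level_modules(src_dir: str) -> set[str]:
    """Collect top-level module names imported anywhere under src_dir."""
    modules: set[str] = set()
    for path in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                modules.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                modules.add(node.module.split(".")[0])
    return modules

def declared_packages(requirements_file: str) -> set[str]:
    """Very naive requirements.txt parse: keep the bare package name per line."""
    names = set()
    for line in pathlib.Path(requirements_file).read_text().splitlines():
        line = line.split("#")[0].strip()
        if line:
            names.add(line.split("[")[0].split("==")[0].split(">=")[0].strip().lower())
    return names

used = imported_top_level_modules("src")                 # assumed source directory
declared = declared_packages("requirements.txt")         # assumed manifest location
stdlib = set(sys.stdlib_module_names)                    # Python 3.10+

phantom = {m for m in used if m.lower() not in declared and m not in stdlib}
print("Possible phantom dependencies:", sorted(phantom))
```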
Remediation Challenges
Identifying vulnerabilities and navigating vulnerability databases is of course only part of the problem; the real work lies in actually remediating the identified vulnerabilities impacting your systems and software.
Aside from general bandwidth challenges and competing priorities among development teams, vulnerability management also suffers from remediation challenges, such as the real potential that implementing changes and updates can impact functionality or cause business disruptions.
The Endor Labs report found that when moving to non-vulnerable packages for some of the most commonly used libraries, the breaking change rate is as follows:
Major - 100%
Minor - 94%
Patch - 75%
This demonstrates that when teams look to move to non-vulnerable package versions, they frequently encounter breaking changes.
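Operationally, the first question for any fix is how big a jump the nearest non-vulnerable version is, since that largely determines the breakage risk a team is signing up for. Here is a small sketch that classifies the jump under a semantic-versioning assumption, with made-up version pairs for illustration:

```python
# Classify how far the nearest non-vulnerable version is from what is installed.
# Assumes semantic versioning; the example versions are made up for illustration.

def upgrade_type(current: str, fixed: str) -> str:
    cur = [int(x) for x in current.split(".")]
    fix = [int(x) for x in fixed.split(".")]
    if fix[0] != cur[0]:
        return "major"   # per the report, breaks clients of popular libraries every time
    if fix[1] != cur[1]:
        return "minor"   # breaks clients roughly 94% of the time
    return "patch"       # still roughly a 75% chance of disruption

print(upgrade_type("2.14.1", "2.17.1"))  # -> "minor"
print(upgrade_type("1.9.3", "2.0.0"))    # -> "major"
```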
As a parallel, in my article titled “Open Source Security Landscape 2024” I dug into Synopsys' Open Source Security and Risk Analysis (OSSRA) report, which found that 96% of codebases contained OSS, 77% of all code in the assessed codebases originated from OSS, 84% of codebases contained vulnerable OSS components, and 91% of codebases contained components that were 10 or more versions behind.
This means open source is pervasive, and so are its vulnerabilities and outdated components; and when teams seek to update those components, the majority of updates are likely to contain breaking changes, as found in the Endor Labs report. This is visualized in the chart from the report below, showing that the nearest non-vulnerable updates would contain breaking changes in the large majority of cases.
Modern Software Composition Analysis (SCA)
The remainder of the Endor Labs State of Dependency Management Report goes on to discuss the role of SCA when it comes to dependency management. While SCA tools are far from new, traditionally they have focused on Common Vulnerability Scoring System (CVSS) severity scores, which makes sense, given most organizations prioritize vulnerabilities for remediation based on those scores, specifically Highs and Criticals.
The problem of course, as we know from sources such as the Exploit Prediction Scoring System (EPSS), is that less than 5% of CVEs are ever exploited in the wild. So organizations prioritizing based on CVSS severity scores are essentially spending scarce resources at random, remediating vulnerabilities that never get exploited and therefore pose little actual risk.
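Notably, EPSS scores are freely available from FIRST's public API, so folding exploitation probability into triage doesn't require a commercial feed. A minimal lookup sketch (not a hardened client) might look like this:

```python
# Minimal sketch: look up EPSS exploitation probabilities from FIRST's public
# API as an additional prioritization signal alongside CVSS. Not a hardened client.

import requests

def epss_scores(cve_ids: list[str]) -> dict[str, float]:
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=30,
    )
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json().get("data", [])}

scores = epss_scores(["CVE-2021-44228", "CVE-2019-0708"])
for cve, score in scores.items():
    print(f"{cve}: {score:.2%} probability of exploitation in the next 30 days")
```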
While scanning tools, including SCA, have increasingly begun integrating additional vulnerability intelligence such as CISA KEV and EPSS, some have yet to do so, and most have not added this alongside deep function-level reachability to show which components are:
Known to be exploited
Likely to be exploited
Actually reachable
The Endor Labs report highlights that:
Less than 9.5% of vulnerabilities are exploitable at the function level
This means that without the combination of metrics above, organizations waste tremendous amounts of time remediating vulnerabilities that pose little to no actual risk.
In fact, I wouldn't even go so far as to say they manage to do that, because as I've written previously in articles such as “Sitting on a Haystack of Digital Needles”, most organizations have vulnerability backlogs in the hundreds of thousands to millions.
Couple that with Security beating Developers over the head with lengthy spreadsheets of noisy findings that have little actual context or relevance to risk reduction, and you have a recipe for disaster; it also explains why AppSec is a dumpster fire in most organizations.
While the security industry can beat the Secure-by-Design drum until they're blue in the face and try to shame organizations into sufficiently prioritizing security, the reality is that our best bet is having organizations focus on the risks that actually matter.
In fact, the Endor Labs report found:
organizations that combine reachability and EPSS see a noise reduction of 98%.
In a world of competing interests, with organizations rightfully focused on business priorities such as speed to market, feature velocity, revenue and more, having Developers quit wasting time and focus on the 2% of vulnerabilities that truly present risk to their organizations is monumental.
This pivot to noise reduction offers the opportunity to break down longstanding friction between Security and Developers, help security be a real business enabler by better prioritizing resources, and enable secure outcomes for our digital ecosystem.
I strongly recommend giving the full report a read here.