Cybersecurity First Principles & Shouting Into the Void
The market failure of cybersecurity won’t fix itself on a voluntary basis
Last week we discussed the recently released publication from the Cybersecurity and Infrastructure Security Agency (CISA) focused on Secure-by-Design/Default approaches to software and product development, in an article I titled "The Elusive Built-in, Not Bolted-on" of cybersecurity, which can be found here.
As pointed out in the article, the CISA Secure-by-Design publication uses the word should over 50 times, explaining how software and technology suppliers should make Secure-by-Design/Default a focal point of their product design and development processes.
Many of the recommendations made are indeed solid cybersecurity recommendations.
That said, they are also recommendations that many of us in the industry have been making for decades, both to our development and business peers building software and applications, and to the companies in the industry creating software and products.
The uncomfortable reality for cybersecurity as an industry and society is that you don’t get what you ask for, you get what you tolerate.
So far, we’ve tolerated insecure products, insufficient security controls and measures, and a mountain of security incidents exposing the sensitive data of nearly the entire population in one form or another.
As leaders have pointed out, with the integration of software into nearly everything we do, the potential risks will continue to escalate beyond simple data exposure to real safety concerns such as impacting critical infrastructure, health/medical care, national security and more, as cited in the recent National Cybersecurity Strategy (NCS), which we will discuss more below.
We’ve heard a lot about building security in, rather than bolting it on, and despite the latest form that the phrase has taken of “shifting security left” in our Cloud-native DevSecOps paradigm, this concept has been around for literally decades.
As pointed out by esteemed longtime security leader Rick Howard in his latest book “Cybersecurity First Principles” (which I recommend checking out - pictured below), the concept can be traced to sources as early as the 1972 paper titled “Computer Security Technology Planning Study” by James Anderson of the U.S. Air Force (USAF) (being former USAF myself, we’re always ahead of the curve, sorry Navy/Army/Marine Corps).
This publication of course draws on its predecessor, Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security - aka “The Ware Report”, which discusses the problems of securing computer systems.
Despite more than 50 years passing since the publication of these papers, we as an industry have yet to see the elusive "built-in" model become our operating model. Security still largely occurs as a bolted-on, late-to-the-SDLC-party activity, with the exposure of billions of individuals’ data to accompany it.
Market Factors and Cybersecurity’s First Principle
While many like to hope that the market will sort itself out, and FUD-based stories of reputational harm and financial impact abound, the harsh reality is that the majority of organizations that suffer a cybersecurity incident go on to recover, seeing share prices rebound and even climb post-incident.
For every rare example of a company put entirely out of business by a cybersecurity incident, there are exponentially more that experience an incident and largely carry on just fine, or even better.
Several examples show that incidents impacting organizations such as Home Depot, JPMorgan Chase, and Target didn’t have long-term or lasting impacts on their share prices, and others show that despite ransomware attacks and a short-lived dip, firms’ share prices often climb above industry averages after an incident. For a deeper dive on the topic, check out Kelly Shortridge’s article "Markets DGAF About Cybersecurity", where she also cites studies that support this unfortunate reality.
First principles are a core concept in any discipline, including cybersecurity.
A first principle is a foundational proposition or assumption that stands alone. We cannot deduce first principles from any other proposition or assumption.
In Rick Howard’s Cybersecurity First Principles, mentioned above, Rick lays out a notional roadmap for cybersecurity’s first principle, which can be seen at the foundation of the image below.
This means that cybersecurity’s first principle, theoretically, and one with which I am sure many practitioners would agree, is to "Reduce the Probability of Material Impact".
If the term "material impact" sounds familiar, you’ve been paying attention to the cybersecurity industry as of late, as the phrase has seen extensive use by sources such as the Securities and Exchange Commission (SEC).
The SEC has recently proposed a rule titled “Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure”, which focuses on the requirement of reporting related to material cybersecurity incidents, among other emerging requirements.
The proposed rule discusses the rising costs of cybersecurity incidents for businesses and discusses adverse impacts such as:
Costs due to business interruption, decreases in production, and delays in product launches
Payments to meet ransom and other extortion demands
Remediation costs, such as liability for stolen assets or information, repairs of system damage, and incentives to customers or business partners in an effort to maintain relationships after an attack
Increased cybersecurity protection costs, which may include increased insurance premiums and the costs of making organizational changes, deploying additional personnel and protection technologies, training employees, and engaging third-party experts and consultants
Lost revenues resulting from intellectual property theft and the unauthorized use of proprietary information or the failure to retain or attract customers following an attack
Litigation and legal risks, including regulatory actions by state and federal governmental authorities and non-U.S. authorities
Harm to employees and customers, violation of privacy laws, and reputational damage that adversely affects customer or investor confidence
Damage to the company’s competitiveness, stock price, and long-term shareholder value.
While many of these can, and do, occur due to a security incident, as we’ve pointed out above, things such as loss of market share, customer attraction, and competitiveness rarely materialize, at least not with a significant enough impact to drive lasting change by businesses in terms of prioritization, investment, and initiative around secure software and product development.
Hence, many consider cybersecurity a market failure, one that cannot be resolved on a voluntary basis or through the guidance of "should do" statements and recommendations.
Until the “should do’s” become “must do’s”, we’re unlikely to see systemic behavioral change among organizations.
Considering years of evidence that impacts to market competitiveness, share price, and investor confidence haven’t been significant enough to drive behavioral change among software suppliers and organizations, that leaves items such as harm to customers and regulatory consequences.
The National Cybersecurity Strategy (NCS) and Shaping Market Forces
One of the most effective ways to mitigate material impacts from cybersecurity incidents, as acknowledged by the recently released National Cybersecurity Strategy (NCS), which we previously covered extensively here, is to embrace security and resilience by design. This is also emphasized in the CISA publication of Secure-by-Design/Default, which we mentioned at the onset of this article.
In the context of the software supply chain and society, few are better positioned to do this than the manufacturers and suppliers of technology, as well as large, well-resourced, and capable organizations wielding technology to facilitate business outcomes while concurrently serving as stewards of our digital ecosystem and the data of millions of individuals.
A big emphasis of the NCS was Pillar Three, which was “Shaping Market Forces to Drive Security and Resilience”. The NCS recognizes that market forces alone have proven historically insufficient to drive best practices related to security and resilience.
While the SEC’s proposed rule does cite examples of regulatory actions from state and federal entities, those actions have, as of now, proven insufficient to drive systemic behavioral change when it comes to cybersecurity investment and prioritization, leaving an ecosystem rife with insecure and vulnerable digitally enabled products and what seem like daily headlines about the exposure of individuals’ sensitive data.
Therefore, the material impact often isn’t borne by the firms developing and wielding the technology, but instead by the customers and citizens whose data they possess, and ultimately, society as a whole.
While these impacts may often be relatively benign and tied to sensitive data exposure, as we see software increasingly integrated into all facets of society, such as our homes, medical devices, critical infrastructure, and national security, those material impacts hold the potential to far more seriously affect the privacy and safety of society.
Again looking to the NCS: as it points out, when organizations that possess individuals’ personal data fail to act as good stewards of that data (e.g., by producing secure and resilient software and products), they externalize those costs onto consumers and citizens. The material impact is generally borne by citizens, not the organizations serving as stewards of the data or the technology producers and suppliers.
To elevate cybersecurity’s first principle as advocated by Rick, reducing the probability of material impact, we will need to see a shift in accountability and liability for those serving as data stewards in our digital ecosystem, as well as those producing software and digitally enabled products.
This is where strategic objective 3.3 of the NCS, "Shift Liability for Insecure Products and Services", is so important. Forthcoming efforts to shift liability for insecure software products and services and to establish standards of reasonable precaution will, theoretically, help drive systemic change.
Legislation to establish liability will escalate the material impact of regulatory actions, forcing organizations to make changes to the way they collect, store and protect personal data, as well as how they develop software, products and services.
The challenge here, of course, will involve determining what reasonable precautions actually are; establishing a mechanism for validating that those precautions were actually taken (e.g., self-attestation or third-party verification); and defining protections, often called "safe harbor" provisions, for organizations that did implement reasonable precautions yet experience a cybersecurity incident nonetheless. As the NCS acknowledges, even the most advanced security programs cannot prevent all vulnerabilities.
Cybersecurity as a Business Enabler
We often hear lines about how cybersecurity is a business enabler. There is of course a lot of truth to that statement, especially as nearly every company is increasingly becoming a software company that just happens to have competencies in “x” (whatever their revenue generating activities are, most of which are now powered by software).
That said, cybersecurity does inevitably impose additional costs in the forms of mitigations, investments in secure development, research, and potentially delayed speed to market, among others.
However, when, and if, regulatory measures do evolve to shift liability as the NCS states, holding firms sufficiently accountable for proper data stewardship and secure software and product development, it will strengthen both the concept of cybersecurity as a business enabler and Rick’s proposed cybersecurity first principle of reducing the probability of material impact, since regulatory measures are among the potential material impacts the SEC cites in relation to cybersecurity incidents.
That said, until regulatory measures related to cybersecurity incidents reach a level of materiality that spurs a commensurate organizational response in cybersecurity investment and prioritization, we are unlikely to see the market failure of cybersecurity voluntarily address itself, and therefore, we are shouting into the void.