The Elusive Built-in not Bolted-on
A look at CISA's "Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default" publication
There are few phrases more ingrained and repeated in the cybersecurity industry than “built in, not bolted on”.
A quick Google search will show you countless articles using the phrase, and its origins date back years, even decades. That said, we are still far from a point where cybersecurity is treated as a business imperative rather than just a technical concern, something ingrained from the onset of software and product development lifecycles.
The concept of shifting the balance of cybersecurity risk has been a prominent aspect of the public dialogue around cybersecurity and software as of late.
It was a key focus in the recent Foreign Affairs article by the Cybersecurity and Infrastructure Security Agency (CISA)'s Jen Easterly and Eric Goldstein, who serve as CISA Director and Executive Assistant Director for Cybersecurity, respectively.
It was also a concept prominently featured in the recently released National Cybersecurity Strategy (NCS), which I covered in a previous article here and in a panel with WashingtonExec here. One of the two key shifts the strategy emphasizes is "Rebalance the Responsibility to Defend Cyberspace", which states that the most capable and best-positioned actors in cyberspace must be better stewards of the digital ecosystem.
In economic terms, this party is dubbed the "least-cost avoider", a term researcher Chinmayi Sharma uses in her work (here and here) on the open source software (OSS) ecosystem, its role in modern critical infrastructure and society, and the software supply chain. (I interviewed Chinmayi in depth on these topics on the Resilient Cyber Podcast, here.) It is also discussed in a Lawfareblog.com article titled "Cybersecurity and the Least Cost Avoider" by Paul Rosenzweig from back in 2013.
The overarching concept is that software and technology suppliers and vendors are best positioned to drive down systemic risk and fix vulnerable software/products by prioritizing cybersecurity alongside other business drivers such as speed to market and profitability. The alternative, which is largely the model we live in now, is making downstream consumers and citizens bear the cost of software failures and incidents tied to insecure products and applications.
CISA recently released a publication titled “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default”, which we will be discussing below.
Vulnerable by Design
The paper opens by discussing how software is pervasive in nearly every aspect of modern life, and how insecure software has caused, and can cause, disruptions and safety concerns in areas such as medical care, utilities, critical infrastructure, and even national security.
It discusses how technology manufacturers should take a Secure-by-Design/Default approach when designing technology and software-driven products. It even cites efforts in the EU, such as the Cyber Resilience Act, which strives to encourage producers to deliver secure products to the market rather than the proliferation of vulnerable products that helped create our current ecosystem of systemic cyber risk, one that has groups such as the World Economic Forum (WEF) predicting a "catastrophic" cyber attack in the next two years.
To encourage this shift, CISA calls out two key terms:
Secure-by-Design - Systems/products where the security of customers is a core business goal, not just a technical feature, and where security is built in from the onset of the software/system development lifecycle (SDLC).
Secure-by-Default - Systems/products that are secure out of the box, rather than requiring hardening and secure configuration before end consumers/users can use them without incurring excessive risk.
The document walks through each of these terms in more depth, so let's do the same.
Before diving in, I wanted to give a plug for a book titled “A Vulnerable System: The History of Information Security in the Computer Age” by Andrew Stewart. It is one of the most comprehensive books on cybersecurity I’ve encountered and demonstrates just how long experts and cybersecurity professionals have been advocating for the below concepts (hint: it’s been a long time!)
Secure-by-Design
The document goes on to define Secure-by-Design as
Technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure.
The section goes on to discuss sound and longstanding cybersecurity best practices such as risk assessments and defense-in-depth. It also recommends activities such as threat modeling during product development to address threats and potential vulnerabilities in system design and development. It even specifically calls out "building security in" versus "bolting it on", as we have often heard called for in cybersecurity for many years. (In the age of DevOps/DevSecOps this has taken the form of "shifting security left".)
A case is made that while there are, and will be, costs associated with more rigorous security upfront, those costs are traded off against improved security posture for customers, organizational brand reputation, and, theoretically, cost savings in post-production, deployment, and distribution of products, such as patching and maintenance, along of course with reduced risk of security incidents.
Where industry will inevitably take issue with these concepts is in the terminology, such as “reasonably protects”.
What does it mean to reasonably protect against malicious cyber actors?
If you ask a diverse pool of security and technology professionals, you will inevitably get an equally diverse range of responses on what reasonable protection is.
Another common retort is that increased security effort has other impacts, such as delayed development and deployment timelines, slower speed to market, and increased costs. Those delays may hinder sales and market share, and the increased costs typically get passed on to consumers in pricing, which may also hinder sales, especially in a society that unfortunately doesn't seem to value secure digital products, at least not with its pocketbook and spending patterns, nor in its emphatic volunteering of sensitive data to applications and technology conglomerates. There's also the reality that everyday citizens just aren't cyber-aware or in a position to know which products are secure and which aren't.
Some professionals, such as Kelly Shortridge, have even pointed out that firms' stock prices don't seem to suffer, at least not over moderate to long periods following a security incident or data breach, as in her article "Market's DGAF About Cybersecurity". Many of her points are supported by other researchers with regard to stock prices after a security incident, much to the dismay of many cybersecurity professionals who often cite "brand and reputational damage" as risks associated with poor cybersecurity.
Those retorts aside, there's no denying that there is a case to be made for Secure-by-Design products and for ensuring that vulnerable technologies and systems aren't just passed downstream to consumers, the overwhelming majority of whom have no real knowledge or expertise when it comes to the cybersecurity of the products they use. This concept of building security in and shifting security left isn't just being advocated by Federal agencies, but by the cybersecurity industry itself too.
Secure-by-Design products also would benefit the large population of businesses who live below what we as an industry call the "cybersecurity poverty line", a term coined by industry leader Wendy Nather to distinguish organizations capable of implementing the measures needed to stay secure from those that can't, with the latter, those below the cybersecurity poverty line, making up the bulk of the industry.
Secure-by-Default
Moving on from Secure-by-Design, the next section focuses on Secure-by-Default.
“Secure-by-Default” means products are resilient against prevalent exploitation techniques out of the box without additional charge. These products protect against the most prevalent threats and vulnerabilities without end-users having to take additional steps to secure them.
Critics of course will again ask who or what determines the "prevalence" of exploitation techniques, and how suppliers can provide additional security without additional charge, given that secure engineering and development takes labor hours and investment, expenses that often get passed on to consumers. To the former argument, there is no shortage of industry reporting and research citing the most prominent forms of exploits and attacks, such as the notable Verizon Data Breach Investigations Report (DBIR), among many others.
Nonetheless, CISA makes the case that secure configurations should be the default baseline for products and also that the complexity of security configurations should not be a customer problem.
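The principle that secure configurations should be the shipped baseline can be sketched in a few lines of code. This is a purely illustrative sketch, not any vendor's actual API; every name below is invented for the example:

```python
from dataclasses import dataclass

# Hypothetical illustration of Secure-by-Default: a product configuration
# object whose defaults are the secure choice, so a customer who changes
# nothing gets a hardened product. All names here are invented.
@dataclass(frozen=True)
class BucketConfig:
    block_public_access: bool = True   # secure default: private
    encrypt_at_rest: bool = True       # secure default: encrypted
    require_tls: bool = True           # secure default: no plaintext access

# A customer who accepts the defaults gets a secure product...
default_cfg = BucketConfig()
assert default_cfg.block_public_access and default_cfg.encrypt_at_rest

# ...and any weakening of that posture must be an explicit, visible decision.
loosened_cfg = BucketConfig(block_public_access=False)
print(default_cfg)
print(loosened_cfg)
```

The design point is that the customer never has to take action to become secure; they only have to take action, deliberately and visibly, to become less secure.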
They draw analogies to seatbelts being included in new cars, which aligns with examples given before regarding security/safety regulation of products and suppliers in other industries such as manufacturing and automobiles. This analogy is also often cited when discussing software liability, a concept that stirs up a lot of anxiety among technology producers/suppliers and is discussed by Jim Dempsey in a Lawfare blog titled "Cybersecurity's Third Rail - Software Liability".
For examples of services and technologies that aren't Secure-by-Default in the cloud-native context, one is Amazon Web Services (AWS), which announced in December 2022 that S3 storage buckets will block public access for new buckets starting in April 2023. This comes 17 years after the S3 service was introduced, and after a slew of prolific cloud data leaks that exposed millions of records of sensitive data such as tax records, PII, financial data, defense industrial base (DIB) systems, and more.
Another example is the wildly popular cloud-native container orchestrator Kubernetes, which comes with many insecure or vulnerable default configurations, such as a lack of network segmentation, access control, and secrets management, which Control Plane CEO/Founder Andrew Martin (also the author of the book Hacking Kubernetes, which I recommend checking out) discusses on a recent episode of the Cloud Security Podcast. Kubernetes is a bit of a unique case since it is an OSS project, but it gets offered by vendors in their various flavors, as well as by Cloud Service Providers (CSPs) as managed service offerings, such as AWS' Elastic Kubernetes Service (EKS).
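To make one of those insecure defaults concrete: by default, every pod in a Kubernetes cluster can talk to every other pod. A common first mitigation is a namespace-wide "default deny" NetworkPolicy. Below is a sketch of such a manifest built as a plain Python dictionary; the field names follow the `networking.k8s.io/v1` API, while the namespace and policy names are invented for the example:

```python
import json

# Sketch of a "default deny" NetworkPolicy that closes Kubernetes'
# permissive pod-to-pod default for one namespace. The namespace name
# "example-app" is invented for illustration.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "example-app"},
    "spec": {
        # An empty podSelector matches every pod in the namespace.
        "podSelector": {},
        # Denying both directions unless another policy explicitly allows traffic.
        "policyTypes": ["Ingress", "Egress"],
    },
}

# Kubernetes accepts JSON manifests, e.g. via `kubectl apply -f policy.json`.
print(json.dumps(default_deny, indent=2))
```

Note the inversion of effort this illustrates: the cluster operator has to bolt this on after the fact, which is exactly the pattern the CISA guidance argues should be reversed.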
These are obviously enterprise IT examples, but if enterprises are falling victim to insecure default configurations, how do you think everyday citizens with little to no cybersecurity expertise will fare with insecure defaults? Hint: we already know, and it's not pretty.
That said, there of course is something to be said about the responsible and safe use of products, including software, by consumers. The same applies to countless other examples, such as automobiles, food, pharmaceuticals and so on. That doesn’t mean that software/technology products cannot be made more secure by default though.
Obviously there is a tension between developer velocity, ease of use, and convenience on one hand and secure default configurations on the other, and as an industry we've generally defaulted (pun intended) to the former over the latter, often to the detriment of customers/consumers, millions of sensitive records, and arguably society.
Recommendations made for Software Manufacturers
Having defined and initially discussed Secure-by-Design/Default, the document moves on to making recommendations for software manufacturers (often also referred to as software suppliers).
These recommendations include “Software Product Security Principles”, such as:
Not having the burden of security fall solely on the customer/consumer
Embracing radical transparency and accountability
Building organizational structure and leadership to help achieve the secure software goals (a security culture starts at the top)
To enable these recommendations the guidance emphasizes several operational tactics, such as:
Convene routine meetings with company executive leadership to drive the importance of Secure-by-Design/Default approaches in the org
Operate around the importance of software security to business success
Using a tailored Threat Model during development. (For Threat Modeling resources I recommend following leaders such as Adam Shostack, Robert Hurlbut and also checking out resources such as the Threat Modeling Manifesto)
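To make the threat-modeling tactic above a bit more concrete, here is a minimal, purely illustrative sketch in the spirit of STRIDE; the components and threats are invented, and real threat modeling (per the resources mentioned above) is far richer than a lookup table:

```python
# The six STRIDE categories commonly used to structure a threat model.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# A toy threat model: for each (invented) system component, the categories
# the team has analyzed and an example threat recorded for each.
model = {
    "login-api": {
        "Spoofing": "credential stuffing",
        "Denial of service": "auth endpoint flood",
    },
    "order-db": {
        "Information disclosure": "unencrypted backups",
        "Tampering": "SQL injection via reporting queries",
    },
}

# Flag categories each component has not yet been analyzed against,
# so gaps in coverage are visible during design reviews.
for component, threats in model.items():
    gaps = [cat for cat in STRIDE if cat not in threats]
    print(f"{component}: unanalyzed categories -> {gaps}")
```

Even a toy structure like this captures the core idea the guidance pushes: threats get enumerated per component during design, not discovered in production.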
The guidance goes further and discusses Secure-by-Design/Default Tactics, which we will touch on below.
Secure-by-Design Tactics
For Secure-by-Design tactics the guidance draws on sources such as NIST's 800-218 Secure Software Development Framework (SSDF), which is quickly becoming a requirement for software vendors selling to the Federal space, and something they will need to self-attest to aligning with. I wrote extensively about this emerging requirement from the Office of Management and Budget (OMB) memo M-22-18, along with SSDF, here.
Some of the tactics the guidance cites include:
Memory-safe programming languages, which are also being advocated by the Office of the National Cyber Director (ONCD) and the National Security Agency (NSA) in its "Software Memory Safety" publication.
Secure Hardware Foundation
Secure Software Components (hello Software Supply Chain Security)
Web Template Frameworks
Static and Dynamic Application Security Testing (SAST/DAST)
Vulnerability Disclosure Programs (I discuss VDPs and Product Security Incident Response Teams (PSIRTs) here)
Software Bill of Materials (SBOM) - which has gotten tremendous attention from NTIA, CISA, and industry alike. I write about prominent SBOM formats and the role of SBOMs at CSO Online here (and focus on them in my upcoming book Software Transparency from Wiley, here)
Defense-in-Depth
And several others
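As a small illustration of the SBOM tactic listed above, here is a minimal sketch of an SBOM in the CycloneDX JSON format, one of the prominent formats mentioned; the component names and versions are invented for the example:

```python
import json

# A minimal, illustrative CycloneDX-style SBOM: an inventory of the
# components inside a software product. Component names/versions invented.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "left-pad", "version": "1.3.0"},
        {"type": "library", "name": "log4j-core", "version": "2.17.1"},
    ],
}

# An SBOM like this lets a consumer answer "do we ship component X?"
# without reverse-engineering the product.
print(json.dumps(sbom, indent=2))
```

In practice SBOMs are generated by build tooling rather than written by hand, but even this skeleton shows the transparency value: the component inventory travels with the product.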
Secure-by-Default Tactics
In this section the guidance provides tactics that software manufacturers should strive to align their programs with, such as:
Eliminating default passwords and mandating MFA (note that compromised credentials are involved in 60-80% of incidents, depending on the source you reference, such as Verizon's Data Breach Investigations Report (DBIR))
Single Sign On (SSO)
Secure Logging
Tracking and Reducing the Size of Hardening Guidance
Considering the user experience consequences of security settings
And others.
The last two listed are a refreshing take that aligns with growing calls for user experience and human factors to be considered in cybersecurity.
The guidance also recognizes that it is critical to get customer input when balancing operational and security settings, per my comments above about developer velocity and functionality. The same applies to end users, everyday citizens using products who want an enjoyable experience. This is where we start to see increased synergy between Human Centered Design (HCD) and cybersecurity, a disappointingly overlooked relationship historically.
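The "eliminate default passwords" tactic listed above can be sketched simply: provision a unique per-device credential instead of a shared, well-known one, and reject known defaults outright. This is an illustrative sketch only, with all names invented:

```python
import secrets

# Well-known shared defaults that ship on far too many devices.
KNOWN_DEFAULTS = {"admin", "password", "12345", "changeme"}

def provision_device_password() -> str:
    """Generate a unique per-device credential instead of a shared default."""
    return secrets.token_urlsafe(16)

def accept_password(candidate: str) -> bool:
    """Reject well-known default credentials and trivially short ones."""
    return candidate.lower() not in KNOWN_DEFAULTS and len(candidate) >= 12

# Each unit leaves the factory with its own credential...
pw = provision_device_password()
assert accept_password(pw)

# ...and a customer can never set the device back to a known default.
assert not accept_password("admin")
assert not accept_password("ChangeMe")
```

This is the same pattern some router and IoT vendors have adopted (unique credential printed per unit), and it removes an entire class of internet-wide scanning attacks that rely on shared defaults.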
Hardening vs Loosening Guides
Another interesting paradigm shift the guidance advocates for is shifting from Hardening Guides to Loosening Guides.
Anyone who has worked in IT/cybersecurity, especially in regulated environments (e.g. Federal/DoD), is used to working with hardening guides such as the Center for Internet Security (CIS) Benchmarks or DISA Security Technical Implementation Guides (STIGs).
The way this generally plays out is that you receive a product or software from a supplier/vendor and then need to follow the vendor's guidance, along with guidance produced by third parties such as those mentioned above, to "lock down" the software and make it secure. The alternative would be receiving a Secure-by-Default product and needing only to loosen configurations to enable desired functionality.
This of course is again a delicate balance between customer functionality, ease of use, and security. Any of my DoD security peers may remember DoD/DISA's "Gold Disk" days, when a tool could be used to help harden systems but would inadvertently render them non-functional bricks, much to the dismay of those who needed to actually use them. The problem isn't gone either, with the notable example of the viral "Fix Our Computers" letter from DoD AI leader Michael Kanaan, who points out that it takes hours to do simple tasks on DoD endpoints, in part due to security software and configurations that exhaust the devices' resources.
There is a longstanding challenge between making products and software functional and secure, with the two often impacting one another.
Much like the push for Platform Engineering and attempts to abstract underlying infrastructure complexity from Developers, we should be striving to abstract security from users and consumers to the extent practical to allow users to use software securely without creating a cumbersome customer experience or diminishing the value of using the product or software to begin with, which of course is easier said than done.
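The loosening-guide pattern can also be sketched in code: the product ships fully locked down, and every deviation is an explicit, recorded customer decision rather than a hardening step left undone. The settings and API below are invented for illustration:

```python
# Illustrative sketch of the "loosening guide" pattern: hardened out of the
# box, with every relaxation an explicit, audited choice. All names invented.
class ProductConfig:
    def __init__(self):
        # Nothing to "lock down" after purchase; risky features start off.
        self.settings = {
            "telemetry_remote": False,
            "legacy_tls": False,
            "guest_access": False,
        }
        self.audit_log = []

    def loosen(self, setting: str, justification: str) -> None:
        """Enable a risky feature only with an explicit, logged justification."""
        if setting not in self.settings:
            raise KeyError(setting)
        self.settings[setting] = True
        self.audit_log.append((setting, justification))

cfg = ProductConfig()
# The customer opts in to exactly the risk they need, on the record.
cfg.loosen("legacy_tls", "required by unsupported lab instrument")
print(cfg.settings)
print(cfg.audit_log)
```

The inversion matters operationally: a missed step in a hardening guide silently leaves a system exposed, while a missed step in a loosening guide merely leaves a feature disabled.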
Recommendations for Customers
Last up in the publication is guidance for customers themselves. The guidance recommends that organizations hold their technology suppliers accountable for the security outcomes of their products, and that they prioritize purchasing Secure-by-Design/Default products.
It recommends that IT/Procurement/Security teams be involved in the assessment of software prior to purchase, and that security teams actually be empowered to push back if needed. In some organizations this does occur, but ultimately the business owns the risk, and security is there to empower the business to make risk-informed decisions around its consumption and use of technology; the key is establishing the rapport needed to have those recommendations heard.
Another challenge is that the ability of consumers to hold suppliers accountable varies depending on the size of the consumer and supplier in the scenario.
Hyperscale corporate conglomerates are much less likely to listen to SMBs, which represent a small fraction of their revenue, than to, say, the Federal government, which is one of, if not the, largest purchasers of IT in the world.
This is why we’re seeing efforts such as OMB 22-18 and the requirement for self-attesting (and in some cases 3rd party assessments) to software suppliers aligning with practices in NIST’s SSDF when selling to the Federal government. It is an attempt by the Federal government to utilize its massive purchasing power to drive systemic changes and buy-down risk.
Another notable effort on this front is NIST's Cybersecurity Labeling for Consumers: Internet of Things (IoT) Devices and Software, along with traction in international examples such as the EU Cyber Resilience Act. Measures like these are critical to helping end users, consumers, and citizens who don't have security teams at their disposal make risk-informed purchases and consumption of products and technologies.
The Elephant in the Room
I would be remiss if I didn't touch on a topic that some, such as my friend Jacob Horne, have already pointed out: the guidance uses the word "should" 51 times in 11 pages.
There are a lot of things software and technology producers should do, and many of us in cybersecurity have been advocating for them to do so for years, including many of the recommendations cited here.
As Jacob bluntly states:
“The market failure of cybersecurity won’t fix itself on a voluntary basis”
An uncomfortable reality for many in the industry is that it will take that expletive of a word, "Regulation", to drive systemic change in the way software and technology vendors produce their products, because cybersecurity can be categorized as a market failure.
While National Cybersecurity Strategies of the past have danced around the concept, the latest NCS plainly states an objective to "Shift Liability for Insecure Software Products and Services" in Objective 3.3, which emphasizes the need to shift liability onto those entities that fail to take reasonable precautions to secure software, i.e. the least-cost avoider discussed earlier in this article.
Of course, even the most advanced cybersecurity programs and product teams can't negate all malicious activity, but an ounce of prevention is worth a pound of cure, as they say, and this is where concepts such as safe harbor come into play.
The challenge here, and what remains to be seen, is how the conversation around software liability plays out. What will be deemed "reasonable precautions"? What safe harbor measures will be put in place to protect suppliers who take the appropriate precautions, so that they are incentivized to do so but also protected should they still have an incident due to factors beyond the required precautions? Most importantly, what financial and regulatory consequences are we prepared to impose on those who don't? The regulatory ramifications, financial or otherwise, will be the motivating factor that largely determines whether many of the "should" recommendations actually get done.
As they say, you get what you tolerate: if we tolerate insecure products with insufficient consequences to change behavior, that is what we will continue to get, for better or worse.
Another challenge, pointed out by cyber leaders and authors Richard Clarke and Robert Knake in their book "The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats", is that if Washington doesn't act, states will. In the absence of comprehensive Federal cybersecurity regulation, we're seeing states begin to take their own initiatives, such as in California and New York. We're also seeing this manifest on the privacy front, not just security.
This of course has the unintended consequence of creating a patchwork quilt of cybersecurity regulations and requirements that businesses must contend with, ones that are often duplicative, contradictory, and costly. The Federal Government recognizes this in the latest NCS, which calls for the need to "Harmonize and Streamline New and Existing Regulation".
How this all plays out remains to be seen, but one thing's for sure:
Our industry and society are now entirely reliant on software that is pervasive and, unfortunately, insecure and vulnerable, and by extension so are we. Without a significant paradigm shift, this won't change, and the consequences could be devastating.