Bringing Security Out of the Shadows
A look at behaviors and patterns that lead to Shadow usage of technologies and how security hinders itself, with AI as the latest example
If you’re interested in Vulnerability Management, you can check out my book “Effective Vulnerability Management: Managing Risk in the Vulnerable Digital Ecosystem” on Amazon. It is focused on infusing efficiency into risk mitigation practices by optimizing resource use with the latest best practices in vulnerability management.
By now, we’re all well aware of the prolific growth of AI adoption. In fact, it has outpaced previous technological waves such as the Internet, Mobile and Cloud before it.
That said, there’s a common term taking hold in the cybersecurity industry that, by now, should be all too familiar.
What is “Shadow” Usage?
First off, what exactly is “Shadow” usage of a specific technology?
It most commonly refers to the use of technology, software, or services without the knowledge, oversight, collaboration, or governance of Security, and often even the IT team, within an organization.
This has become even more commonplace as technology has evolved away from physical, hardware-based environments toward digital, dynamic, ephemeral, and distributed ones. Employees now work in geographically dispersed environments, many times on their own laptops or phones under a “Bring Your Own Device” (BYOD) model, with access to corporate and organizational data, systems, technologies, and more.
People aren’t carrying a physical server or piece of equipment through the front door and raising eyebrows; they’re engaging with software, services, and technologies through their endpoints, and often doing so while physically working from home, even further disconnected from the organization.
It is simply far too easy to spin up a new service or piece of software, engage the latest SaaS provider, or start using an innovative capability to collaborate or be more productive. Employees know it, and they move out accordingly.
AI as the Latest Example
I was recently reading an article on CSO Online titled “Unauthorized AI is eating your company data, thanks to your employees,” and it had some metrics that, on the surface, should be alarming. But as a seasoned security professional of multiple decades, all I could think was, *yawn*, I’ve seen this story before, several times now.
Some key takeaways from the article include:
Employees at many organizations are engaging in widespread use of unauthorized AI models behind the backs of their CIOs and CISOs, according to a recent study
Employees sharing all sorts of sensitive data (e.g., legal documents, source code, employee information, IP, etc.) with external, non-corporate AI tools and services
74% of ChatGPT use at work occurring via non-corporate accounts
94% of workplace use of Google’s Gemini and Bard occurring via non-corporate accounts
The amount of data put into all AI tools seeing a nearly five-fold increase between March 2023 and March 2024
These figures are indeed concerning, presenting various risks to organizations: legal, regulatory, and competitive concerns, inadvertent and unauthorized data disclosure, cybersecurity risks, and more.
However, this article isn’t about that, and I’ve already covered AI risks in several previous articles.
Furthermore, I would be remiss if I didn’t mention my friend Walter Haydock’s awesome “Deploy Securely” Substack which over the past year has become a go-to resource for AI Security discussions and content.
This article instead will focus on some of the common causes for “Shadow” usage of technologies and the challenges it poses for security teams and organizational risk.
You see, the latest “Shadow AI” isn’t a unique phenomenon; in fact, this is a pattern we have seen play out with all the major technological waves before it, such as the Internet, Mobile, Cloud, and SaaS.
Security has characteristics as a domain and community, as well as cultural behaviors and norms, that lead to this pattern, and every time it leaves us playing catch-up with our business and mission peers.
So let’s take a look at some of those.
Causes
Below we will discuss some of the characteristics of the cybersecurity career field and domain as a whole, as well as some ingrained behaviors and habits that leave us perpetually living in the shadows.
Reactionary Nature
Security, by its very nature, is almost always reactionary.
There is a reason terms like “we need to build security in versus bolt it on” and the newer mantra of “Secure-by-Design” systems and software exist.
Despite the recent push for Secure-by-Design by leaders such as CISA, as I point out in the article titled “The Elusive Built-in not Bolted-on”, building security into digital systems and software is a concept that was advocated for over 50 years ago in the “Ware Report”.
Yet here we are, advocating for an industry shift, 50+ years later, on a concept that is decades old.
Another factor causing the reactionary nature of cybersecurity is that the real innovators are users and attackers. They continuously find ways to circumvent, sidestep, and avoid cybersecurity policies, processes, tools, and requirements.
For users, it is to make their lives easier, be more productive, and do whatever it is they are looking to do in the moment.
For attackers, it is to exploit new technologies, systems, software and environments for their malicious intent, which is overwhelmingly financially driven, per sources such as the latest Verizon DBIR.
This leaves security professionals always reacting to whatever users and attackers are now doing: their behaviors, techniques, patterns, and use of technologies.
Lastly, the same goes for regulatory measures and policies.
Cybersecurity is inevitably a compliance-driven activity, and regulators and policymakers often live in an analog world, not moving at the pace of the digital technologies they govern. They often wait to see how something unfolds and problems materialize before changing regulations and compliance requirements to address them.
This makes sense: we want the free market to prosper and thrive, and we don’t want to strangle innovation before it is underway. The same dynamic is playing out for AI right now.
That said, there is also potential for forward-thinking behavior among policymakers, and we indeed see some of it with the AI Executive Order and various activities from governments around the world, including in the EU, focused on AI and its potential misuse or abuse.
Risk Aversion and Skepticism
Another fundamental characteristic of cybersecurity professionals and the industry is a general sense of risk aversion and skepticism.
Unlike our peers, who may look at new innovations and technologies and wonder how they can be leveraged to enhance market share, productivity, revenue, competitiveness, and more, security practitioners generally think differently.
We’re often looking at technologies and wondering how they can be misused, abused, and exploited.
In fact, a fundamental aspect of cybersecurity is conducting something known as Threat Modeling, which involves asking:
What are we working on?
What can go wrong?
What are we going to do about it?
Did we do a good job?
That question, “What can go wrong?”, constantly permeates security practitioners’ minds.
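To make this concrete, here is a minimal sketch, in Python, of how a team might capture those four questions as a lightweight record; the structure, names, and fields are illustrative assumptions, not any particular framework’s schema.

```python
from dataclasses import dataclass, field

# Illustrative only: a lightweight record mapping each of the four
# threat modeling questions to a field. All names are hypothetical.

@dataclass
class Threat:
    description: str         # "What can go wrong?"
    mitigation: str          # "What are we going to do about it?"
    validated: bool = False  # "Did we do a good job?"

@dataclass
class ThreatModel:
    system: str              # "What are we working on?"
    threats: list[Threat] = field(default_factory=list)

    def open_items(self) -> list[Threat]:
        """Threats whose mitigations have not yet been validated."""
        return [t for t in self.threats if not t.validated]

# Example usage
tm = ThreatModel(system="Internal AI gateway")
tm.threats.append(Threat(
    description="Employees paste source code into an external AI tool",
    mitigation="Route AI traffic through an approved, logged corporate account",
))
print(len(tm.open_items()))  # 1 open threat until validated
```

Even something this simple keeps the “Did we do a good job?” question visible until each mitigation is actually validated.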
While we’re often seeking pause and asking what can go wrong if we do this, the business is often moving out and asking what opportunities we will miss if we don’t.
We’re afraid of opening up ourselves to risks by moving too fast, adopting too soon, or not exercising enough caution.
The business and mission are afraid of trailing competitors, not being innovative, and failing to capture or retain market share or competitive advantage (a dynamic that applies among nation-states as well, where advantage is now overwhelmingly delivered through software).
Cumbersome Processes and Compliance
Cybersecurity is often process and compliance driven as discussed above.
This manifests in many ways, from everyone’s favorite, security questionnaires, to a myriad of compliance frameworks and processes such as FedRAMP, SOC 2, ISO, HITRUST, HIPAA, RMF, NIST 800-171, and CMMC, and the list goes on and on (literally).
It’s so bad, in fact, that the Office of the National Cyber Director (ONCD) is actively soliciting approaches that lead to “Regulatory Harmonization.”
They collected thousands of public comments, all with recurring themes such as:
Costly
Cumbersome
Duplicative
Confusing
Those are just a few of the adjectives of adoration shared with the ONCD when it comes to cybersecurity and compliance.
These cumbersome processes and compliance requirements facilitate the rampant usage of “unauthorized” technologies.
Business peers and organizations bemoan the friction and bottlenecks that have grown to be known as cybersecurity/compliance, and they often sidestep the office of cybersecurity and security teams in general.
In some cases that means falsely claiming ignorance of existing organizational policies and requirements; in others, it means denying the usage outright or never engaging security to seek “permission” in the first place.
As the saying goes, it is easier to ask forgiveness than permission.
The business and mission generally view compliance and security as an impediment to their outcomes and a drain on the organization.
We see it in the commercial space as well as Government, where the “Authority to Operate” (ATO) has been called the “biggest impediment” to the DoD’s and the Government’s ability to successfully achieve innovation and digital transformation.
The same feelings exist across the commercial space, where business, engineering, and development peers generally feel the same way about security and compliance.
Some forward-thinking security peers of mine, such as Kelly Shortridge, take pleasure in calling out the ridiculousness of our industry’s “Security Theater.” Below is a great image from Kelly demonstrating the difference between Security Theater and actual resilience, which is what security professionals should be focused on.
Longstanding Relationship Fractures
All of the above behaviors, characteristics, and dynamics of the cybersecurity career field, contrasted with broader IT, and even more so Engineering, Development, and the Business/Mission, have led to longstanding fractured relationships.
It has now become commonplace to hear quips such as “security is the office of no,” because the peers who have interacted with us are so used to us shooting down most of their ideas and initiatives, or, like sand in an engine, grinding things to a halt and making them unproductive or outright dysfunctional.
This has created a situation where our Development, Engineering and Business/Mission peers generally dread interacting with us.
They view us as painful to interact with, frustrating, generally a group who injects toil and difficulty into their professional lives and limits their ability to be productive and achieve the outcomes they are pursuing.
This was articulated very well in a recent article from Rami McCarthy titled “Don’t Security Engineer Asymmetric Workloads”.
Despite the push for DevSecOps and “breaking down silos” between Development and Security (and Operations, but we will leave them out of this), our behavior has actually bolstered silos, as we inject an alphabet soup of security tools onto Development teams and dump massive piles of vulnerability findings on them with little to no context, letting them drown in the misery and toil.
How to Fix It?
Using some of the examples discussed above that have led to the rampant Shadow usage of technologies, let’s discuss how Security can get out of the shadows and avoid this repeated pattern of being late to the party and trying to play catch up.
First is our reactionary nature in cybersecurity. Rather than sitting back and watching the business eagerly adopt new technologies while we wring our hands wondering what can go wrong and how things can be exploited, what if we were early adopters instead?
For example, as we see organizations, led by security, looking to outright ban technologies such as AI, what if we instead established guardrails and safeguards and rolled up our sleeves to get experienced and familiar with the technology, going beyond just reading about it to actually experimenting with and using it?
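To make “guardrails” less abstract, here is a deliberately minimal, hypothetical sketch in Python of one such safeguard: screening outbound AI prompts for obviously sensitive patterns before they leave the organization. The patterns and the pass/fail policy are illustrative assumptions, not a vetted data loss prevention ruleset.

```python
import re

# Hypothetical guardrail sketch: flag prompts containing obviously
# sensitive patterns before they reach an external AI service.
# These regexes are illustrative placeholders, not production DLP.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN-like identifiers
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # private key material
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                # AWS access key ID shape
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward externally."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(screen_prompt("Summarize our Q3 roadmap"))          # True
    print(screen_prompt("Debug this: AKIAABCDEFGHIJKLMNOP"))  # False
```

A real deployment would live in a proxy or AI gateway with logging, an allow-list of approved tools, and human review, but even a rough safeguard like this beats an outright ban that pushes usage further into the shadows.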
On the topic of risk aversion and skepticism, what if, instead of sitting back and worrying about what can go wrong, we asked what happens if we are reluctant to adopt innovative new technologies, never get familiar with them or become competent in their secure use, and instead let adversaries outpace us with their adoption and innovative use of technologies (e.g., Cloud, AI, and more)? They get to be more effective at their goal of exploitation, contrasted with ours of defense.
In the same vein, if we’re claiming to be a “business enabler,” what if security equally stepped back and asked what negative impacts the business suffers when we impede, delay, and block its adoption and use of innovative technologies?
Do we not realize that security isn’t the business? It isn’t what brings revenue to the organization, and without the business’s ability to continue to thrive, our own roles would by extension cease to exist; there would be no business or mission to protect and secure.
I won’t belabor the cumbersome processes and compliance aspect too much, given the widespread public dialogue on the topic and the open effort by the ONCD to try and rectify the issue.
Lastly, on the topic of relationship fractures, Rami’s article again has some great recommendations, such as starting with transparency, setting clear expectations on timelines and levels of effort, and helping teams own their risks.
What if we were invested collaborators with Engineering, Development, and the Business, actually integrating with their ways of working, with shared, mutually defined goals and objectives for the business on which we all rely, rather than just throwing massive workloads and toil onto them, often driven by speculative fears of what could happen?
What if we asked, what is the risk if we don’t do this, rather than what is the risk if we do?
This isn’t a call to throw caution to the wind, let the business and mission run rampant with risky behavior, or implement no governance; the business should still be empowered to make risk-informed decisions.
It is, however, an opportunity to look at these repeated patterns of finding ourselves in the shadows, bring our behaviors into the light, and ask whether this is truly the most effective way to behave if our goal is actually reducing organizational and societal risk.
Whether we will have the maturity and humility to do that as a career field remains to be seen.