Resilient Cyber Newsletter #4
Is Cybersecurity "full", or full of gatekeepers? Generative AI Misuse, LLM's in Cybersecurity, Explosive CVE growth and The EPSS User Guide
A lot of great resources this week, including a Supreme Court decision, AI misuse and threats, and vulnerability management guides, specifically for leveraging the Exploit Prediction Scoring System (EPSS).
So let’s dive in!
Interested in sponsoring an issue of Resilient Cyber?
This includes reaching over 6,000 subscribers, ranging from Developers, Engineers and Architects to CISOs/Security Leaders and Business Executives.
Reach out below!
Cybersecurity Leadership
Cyber Insurance Rates Fall as Businesses Improve Security (wat?)
An interesting story from Reuters, which cites declining insurance rates as companies become more proficient at mitigating the impact of cybersecurity incidents.
The story cites various factors, such as organizations implementing fundamental security controls like MFA, as well as insurers' growing appetite to provide cybersecurity coverage, as contributing to the declining cost of premiums for cybersecurity insurance.
Oddly enough, the story also notes a nearly 20% rise in ransomware incidents in the first part of 2024 compared to 2023.
Is Cybersecurity “full”?
This week Stuart Mitchell, founder of recruiting firm Hampton North, made a post about the notion that “cybersecurity is full”.
Stuart makes the case that the security labor market has indeed changed, with roles getting increasingly competitive and more and more folks looking to break into the career field.
The post is full of great comments from folks such as Jerich Beason, who cites an over-abundance of entry-level folks looking to break into the field and the need to widen the path into it (e.g. by rethinking degree and education requirements).
That said, this discussion continues to heat up due to charlatans who have made a niche out of promising “six figure salaries” on LinkedIn after a “bootcamp” or a specific certification, for example.
On one hand, you will hear people call out “gatekeeping” when the field refuses to let anyone and everyone in; on the other, you will hear people point out that it isn't unfair to ask folks to build their chops in underlying IT or to demonstrate competency through education, self-development and proficiency.
The truth is that cybersecurity can be hard work, and it may not be for everyone. That doesn’t mean those who want to pursue the career field shouldn’t be given an opportunity, but it also doesn’t mean that we shouldn’t have any sort of standards for the career field and the practitioners operating in it either.
Cybersecurity Gatekeeping and the Value of Non-Security Professions
For a timely discussion around “gatekeeping” within cybersecurity, highly respected security leader Helen Patton published a piece discussing how cybersecurity tends to conduct a lot of gatekeeping when it comes to people without traditional security backgrounds.
She emphasizes the value that people from broader career fields such as electrical engineering, teaching, psychology and more can bring to cybersecurity, offering unique perspectives, experiences and new ways of tackling both old and new problems.
Insight into Cybersecurity spending, budget chaos and tool sprawl
This article from HelpNet Security sheds light on cybersecurity spending, citing that 63% of organizations with more than 5,000 employees have an average cybersecurity budget of $26 million USD.
However, it also discusses a lack of standardization when it comes to cybersecurity budgets and the fact that organizations are struggling to make their investments effective due to tool sprawl, with 40% of respondents citing these challenges. We know this is a common issue, with security tools taking time to implement, configure, optimize and actually monitor.
Additionally, security tools ultimately serve as part of your broader attack surface and, when not governed properly, can introduce more risk than they reduce, as well as cognitive overload and stress for the security teams tasked with managing them.
U.S. Supreme Court ruling will likely cause cyber regulation chaos
In the past several years we've seen cyber regulation evolve, with regulations from the FCC, SEC and CISA, just to name a few.
It's grown so complex and cumbersome that the White House's Office of the National Cyber Director published an RFI calling for industry input on “regulatory harmonization”.
However, this latest ruling from the Supreme Court may have just thrown cyber regulations into further chaos and uncertainty.
The article discusses how a recent U.S. Supreme Court ruling overturned a 40-year-old legal precedent known as “Chevron deference”, holding that courts, not regulatory agencies, will be the final decision makers when it comes to interpreting congressional laws. As the article explains, this could complicate literally thousands of existing federal regulations, including many specific to cybersecurity.
Industry will continue to fumble through the maze of cybersecurity regulations and requirements, which are often duplicative, contradictory and costly, and this latest ruling brings even further complexity and uncertainty into the fold.
AI
Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
Google DeepMind recently released an excellent whitepaper discussing AI misuse and laying out a taxonomy of tactics and insights from real-world data and incidents involving GenAI systems.
They looked at over 200 incidents reported between January 2023 and March 2024 to derive these insights, which is critical given this was a period of rapid GenAI adoption, with ample examples to pull from.
They didn’t just look at the attack patterns or incidents themselves but also the potential motivations of the attackers and various abuse cases across modalities such as video, audio, images and text.
Below is a table of misuse tactics used to exploit GenAI systems, including tactics, definitions and real-world examples (which are hyperlinked in the paper itself).
I definitely recommend checking out this paper to get familiar with the common attack types targeting GenAI and real-world incidents where the attacks have been successful or at least have raised awareness around the risks and potential threats.
LLMs in Cybersecurity: Threats, Exposures and Mitigations
I stumbled across this amazing free eBook from publisher Springer that is a very comprehensive resource when it comes to LLMs in Cyber. It covers a wide range of key topics, such as:
History and fundamentals of LLMs
LLM threats, risks and attack vectors
LLM use cases in Cyber
Funding in the LLM ecosystem
The evolving policy and regulatory landscape
This is an amazing resource for the community and one that I have bookmarked and will be working my way through. Big shoutout to Andrei Kucharavy and the other authors for producing this.
As LLMs continue to be the hottest aspect of AI at the moment in terms of adoption, experimentation and abuse, this is a great resource for security practitioners to dig into.
Driving U.S. Innovation in AI (AI Policy Roadmap from U.S. Senate)
The U.S. Senate recently released a document laying out a tentative roadmap for AI policy. The document discusses how there were three educational campaigns around AI for senators in the summer of 2023, along with classified briefings and then nine bipartisan AI Insight Forums on nine focus areas, as below:
1. Inaugural Forum
2. Supporting U.S. Innovation in AI
3. AI and the Workforce
4. High Impact Uses of AI
5. Elections and Democracy
6. Privacy and Liability
7. Transparency, Explainability, Intellectual Property, and Copyright
8. Safeguarding Against AI Risks
9. National Security
The document goes on to discuss a variety of key topics, such as supporting U.S. Innovation in AI, workforce development, and security concerns such as Safeguarding Against AI Risks and National Security.
This is a good high-level read for those interested in potential focus areas for the U.S. Government and policymakers when it comes to AI and its continued rapid adoption.
OpenAI was hacked, revealing internal secrets and raising national security concerns — year-old breach wasn't reported to the public
On July 4th, the New York Times (NYT) broke a story on how OpenAI experienced a security incident in 2023 that impacted internal communication systems among employees rather than customer data or its AI infrastructure.
However, as the story notes, it has raised concerns around how seriously the company takes cybersecurity, citing quotes from former employees who question the company's security posture when it comes to protecting against nation states such as China as the company marches toward Artificial General Intelligence (AGI).
The story is a key example of a company experiencing rapid growth and success, and likely being reluctant to publicize a security breach due to concerns about impacting the company's growth, reputation and, of course, revenue.
AppSec, Supply Chain and Vulnerability Management
Explosive Vulnerability Growth
As has been the trend for the last several years, the year-over-year count of vulnerabilities (as tracked by Common Vulnerabilities and Exposures (CVE) identifiers) continues to grow at a staggering pace.
Vulnerability Researcher Jerry Gamblin (who I strongly recommend following) published the vulnerability growth for the first half of 2024. Here is a quick recap:
Total Number of CVEs: 20,910
Average CVEs Per Day: 114.89
Average CVSS Score: 6.65
YoY Growth: 36.45%, or +5,586 (15,324 CVEs at this point in 2023)
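For a quick sanity check, the per-day and growth figures fall straight out of the raw counts. Here's a small back-of-the-napkin sketch reproducing that arithmetic (the 182-day figure for “the first half of the year” is my assumption, not something from Jerry's post):

```python
# Back-of-the-napkin reproduction of the H1 2024 CVE stats cited above.
# The 182-day count for "the first half of the year" is an assumption.
h1_2024_cves = 20_910
h1_2023_cves = 15_324
days_elapsed = 182

yoy_delta = h1_2024_cves - h1_2023_cves       # +5,586
yoy_growth = yoy_delta / h1_2023_cves * 100   # ~36.45%
avg_per_day = h1_2024_cves / days_elapsed     # ~114.89

print(f"YoY delta: +{yoy_delta:,}")
print(f"YoY growth: {yoy_growth:.2f}%")
print(f"Average CVEs per day: {avg_per_day:.2f}")
```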
Now ask yourself: has your team gotten ~36% more effective at inventorying, triaging, prioritizing and remediating vulnerabilities?
For the vast majority of the industry, of course, the answer is a resounding NO.
Malicious actors know this, and I've written about the challenge extensively in previous articles, along with my book “Effective Vulnerability Management”, of course.
Industry-wide, organizations are sitting on vulnerability backlogs in the hundreds of thousands or more, making light work for attackers looking to exploit all of the security technical debt sitting around.
Of course organizations can, and must, get more effective at Vulnerability Management, but software vendors also need to adopt a Secure-by-Design approach to avoid disproportionately passing risk downstream to customers and consumers.
Exploit Prediction Scoring System (EPSS) - The User Guide
Chris Madden, Distinguished Technical Security Engineer with Yahoo, recently presented a talk at Security BSides Dublin where he dove into EPSS, providing some of the best coverage of EPSS I have seen to date.
His talk on YouTube can be found here, and he has an amazing slide deck accompanying it that is worth diving into if you’re utilizing EPSS for Vulnerability Management.
Funny enough, I was watching his talk when he put my name on the screen, asked the audience if they had heard of me, and called me the “Mr. Beast of Vulnerability Management”.
As a father of four, soon to be five kids, I have watched more Mr. Beast clips than I’d like to admit, but I will take it as a compliment, as the guy has a massive reach and is quite successful!
For those unfamiliar, EPSS is run by FIRST, the same organization behind the Common Vulnerability Scoring System (CVSS), but by a different Working Group/SIG.
Rather than assigning blanket base scores like CVSS, EPSS estimates the probability that a vulnerability will be exploited over the next 30 days and provides a score. You can then weigh that score against your organization's risk tolerance and use it to focus vulnerability remediation efforts on the vulnerabilities most likely to actually be exploited, as sketched in the example further below.
This is an amazing resource for the community, especially as most are drowning in vulnerability backlogs that show no signs of slowing down.
If you're looking for a deeper explanation of EPSS and how it can be used, I wrote an article titled “A look at the Exploit Prediction Scoring System (EPSS) 3.0” covering the latest version of EPSS at the time of writing.
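To make the “weigh it against your risk tolerance” idea concrete, below is a minimal sketch that pulls EPSS scores from FIRST's public EPSS API and flags anything above an example threshold. The threshold, the CVE list and the exact response field names are illustrative assumptions on my part rather than recommendations, so verify them against FIRST's current API documentation before relying on this.

```python
# Minimal sketch: query FIRST's public EPSS API and flag CVEs whose predicted
# 30-day exploitation probability exceeds an example risk-tolerance threshold.
import requests

EPSS_API = "https://api.first.org/data/v1/epss"
THRESHOLD = 0.10  # example only: flag anything with >= 10% predicted exploitation probability


def epss_scores(cve_ids):
    """Return {cve_id: (epss_score, percentile)} for the given CVE IDs."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cve_ids)}, timeout=30)
    resp.raise_for_status()
    rows = resp.json().get("data", [])  # assumed response shape: list of per-CVE records
    return {row["cve"]: (float(row["epss"]), float(row["percentile"])) for row in rows}


if __name__ == "__main__":
    # Hypothetical slice of a vulnerability backlog to triage
    backlog = ["CVE-2021-44228", "CVE-2019-0708", "CVE-2017-0144"]
    for cve, (score, percentile) in epss_scores(backlog).items():
        decision = "PRIORITIZE" if score >= THRESHOLD else "defer"
        print(f"{cve}: EPSS={score:.3f} (percentile {percentile:.2%}) -> {decision}")
```

In practice you'd pair a signal like this with CVSS, asset criticality and threat intelligence such as CISA's KEV catalog rather than using any single score in isolation.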
Windows: Insecure-by-Design
A provocative article in The Register discussing how Microsoft has struggled with security issues for decades, including the recent APT activity impacting the U.S. Federal Government and many others, with some in Congress calling Microsoft’s products a “National Security Concern”.
The author also cites the recent fiasco with Microsoft's “Recall” feature and potential undesired outcomes such as unauthorized data disclosure and more.
Closing Thoughts
The heated debate on the cybersecurity workforce continues. Some argue the industry is full, that we need to be more rigid with who is allowed in, and that we should call out those preying on folks looking to secure a career and opportunities. Others argue we need to quit throwing around hyperbolic numbers with regard to job shortages, drop the arbitrary degree, education, certification and experience requirements, and have a bigger tent when it comes to who is allowed into roles and the broader career field.
The reality is that both perspectives have shades of truth. On one hand, we know this is a complex field where a decent technical foundation or understanding can serve you well, and many people have traditionally come into cyber from adjacent fields like networking, IT, help desk, system administration and more. On the other, we also know we desperately need talented and passionate individuals to help address the constantly discussed workforce woes, and that diverse experiences and backgrounds can bring a set of new eyes and new ways of looking at age-old problems.
We're seeing a lot of great resources come out around enabling the secure use of AI, GenAI and LLMs. That said, incidents impacting industry leaders can easily undermine these efforts if there isn't transparency to build trust, particularly in systems many already feel are opaque.
The vulnerability landscape continues to spiral out of control: exponential YoY growth in vulnerability counts, challenges with longstanding vulnerability databases leading to several alternatives gaining traction, more vulnerability disclosures and reporting, and overall more software. All of it leaves organizations drowning in massive vulnerability backlogs with little to no hope of getting ahead, at least not without slowing down the business.
We’re living in dynamic times, full of excitement, challenges and opportunities.