Patch management needs a revolution, part 5: How open source and transparency can force positive change

Red Hat Blog

This is the fifth and final part of Vincent Danen’s “Patch management needs a revolution” series.

  • Patch management needs a revolution, part 1: Surveying cybersecurity’s lineage
  • Patch management needs a revolution, part 2: The flood of vulnerabilities
  • Patch management needs a revolution, part 3: Vulnerability scores and the concept of trust
  • Patch management needs a revolution, part 4: Sane patching is safe patching is selective patching

There is an intersection between “compliance” and “security,” but it’s wise to realize that compliance does not equal security. Compliance, when done well, can be a driving force for improving the processes and technologies in an environment, and thus its overall security posture. But a focus on “check the box” security, which is compliance dressed up as security without a fundamental understanding of how security practices and processes should be applied to decrease risk, is detrimental. This is analogous to someone deciding that rotating passwords was a good idea and then, 26 years later, coming back and saying “well, that didn’t help.” It was expensive, it was confusing, and it made no material difference.

The same is true with patching. Patch those things for which patches are available, absolutely. Those patches were created for a reason, usually because the vendor felt the issue was risky enough to warrant the code change (which in itself introduces risk). But those things for which no patch is available? Here is where one must look at the risk. If an issue is rated Critical or Important, absolutely push for a patch. If it is rated Moderate or Low, it is typically less likely to be exploited and, if exploited, less likely to be severe, so pursuing a patch does not measurably reduce risk in your environment. Time and effort would be better spent on things that substantially reduce risk; we discussed the cost of patching in the prior article of this series.
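
To make that policy concrete, here is a minimal sketch of severity-based triage in Python. The Finding structure, severity labels, bucket names, and example CVE IDs are illustrative assumptions for this sketch, not any particular scanner’s or vendor’s data model.

```python
# Minimal sketch of risk-based patch triage, assuming vendor severity
# ratings ("Critical", "Important", "Moderate", "Low") and a flag for
# whether a fix exists. The Finding type is hypothetical.
from dataclasses import dataclass

HIGH_RISK = {"Critical", "Important"}

@dataclass
class Finding:
    cve_id: str
    severity: str          # vendor severity rating
    patch_available: bool  # has the vendor shipped a fix?

def triage(findings):
    """Sort findings into apply-now, pursue, and accept buckets."""
    buckets = {"apply": [], "pursue": [], "accept": []}
    for f in findings:
        if f.patch_available:
            buckets["apply"].append(f)    # a fix exists: deploy it
        elif f.severity in HIGH_RISK:
            buckets["pursue"].append(f)   # no fix yet, but high risk: chase it
        else:
            buckets["accept"].append(f)   # Moderate/Low, no patch: accept the risk
    return buckets

# Hypothetical findings for illustration only.
findings = [
    Finding("CVE-2024-0001", "Critical", True),
    Finding("CVE-2024-0002", "Important", False),
    Finding("CVE-2024-0003", "Low", False),
]
for bucket, items in triage(findings).items():
    print(bucket, [f.cve_id for f in items])
```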

It’s also worth noting that this demand for all patches is a uniquely open source phenomenon. The same demands cannot be made of proprietary vendors, who have no obligation to disclose vulnerabilities they do not intend to fix. Open source vendors, by contrast, cannot hide vulnerabilities behind opaque walls and inaccessible source code. Ironically, transparency, one of open source’s defining benefits, is also the paradox that open source vendors grapple with. On the one hand, transparency and source code availability afford a number of benefits: vulnerabilities can be found faster, mitigated before a patch is available, and even corrected by end users themselves. On the other hand, for those who insist that “compliance” is security, every vulnerability has a box beside it that needs to be ticked, and there’s no hiding it. Proprietary vendors have no need to explain why an issue isn’t overly risky: they simply don’t disclose it. Open source vendors are not afforded that same privilege.

At the end of the day, open source transparency extends more power to end users, even when considering security concerns, than proprietary alternatives. But in order to truly reap those benefits, end users must have a better understanding of risk, or patching becomes an expensive rat race for vendors who are producing the fixes, and for end users who are consuming, testing, and deploying those fixes.

At some point we as an industry, and particularly those of us who are security practitioners, have to realize that the advice we’ve given for years is simply bad advice. It was easy enough to patch everything when there were small numbers of things to patch and software was, by today’s standards, quite straightforward and uncomplicated. It is vastly more difficult, and riskier, to patch tens of thousands of things and introduce significant change into much more complex environments.

It has been discussed a number of times, but the old ways of doing things are hard to shake. Whether it’s the sunk cost fallacy (“we’ve always done it this way and we’re invested in it”) or sheer misunderstanding of risk, we continue to prioritize the things that don’t matter and don’t focus enough on the things that do. And the result? Breaches and significant security events continue to rise, despite those efforts. What we continue to try to do doesn’t scale, and frankly it isn’t working.

The solution: when we know better, we should do better. We now have the information and knowledge to conclude that a risk-based approach is the best approach. And we’re not alone; a simple Google search reveals hundreds of thousands of results, most from the last few years, including frustration from upstream communities. Others are discussing it and challenging the old ways of thinking, which is good! Using the crude illustration from the last article in this series, it doesn’t make sense to spend $1.6M when $295k will afford the exact same risk reduction. And even then, that risk reduction is minuscule when accounting for the whole picture. Take the rest of that money and invest it where it makes sense, such as in tooling and training that actually help. After all, in nearly every other technological aspect of security we’ve moved on from failed techniques, yet we (as an industry) persist in this one particularly expensive and faulty belief.
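
The arithmetic behind that comparison is simple. Only the two totals below come from the illustration in part 4 of this series; the calculation itself is just a back-of-envelope subtraction:

```python
# Back-of-envelope comparison using the two totals cited from part 4
# of this series.
patch_everything = 1_600_000  # cost to patch every reported CVE
patch_selective = 295_000     # cost to patch only what reduces risk
freed = patch_everything - patch_selective
print(f"Freed for tooling, training, etc.: ${freed:,}")  # $1,305,000
```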

One recent example of this change: in May of last year, Google decided to no longer assign CVEs to Low or Moderate severity issues, affirming that it will continue to focus on the more severe vulnerabilities that pose an actual risk to the ecosystem.

Now is the time to move on. It’s time to realize that not all vulnerabilities matter, and that the risk of having a list of CVEs with no patches to apply is less than the risk of not managing our security perimeters, not having appropriate separation of duties, not following the principles of least privilege and zero trust, and not employing appropriate technologies to disrupt and detect determined attackers.

There are a number of technology areas to invest in: secure configurations, logging, monitoring, techniques like Lockheed Martin’s cyber kill chain, zero trust strategies, proper use of identity and authentication, and so forth. These will yield more meaningful and measurable results if the goal is truly risk reduction. This is where the bulk of organizations experiencing a breach appear to be lacking: in the adoption of these crucial technologies and techniques.

Vincent Danen lives in Canada and is the Vice President of Product Security at Red Hat. He joined Red Hat in 2009 and has been working in the security field, specifically around Linux, operating system security, and vulnerability management, for over 20 years.
