Do software security features matter in the world of vulnerability remediation?

Red Hat Blog

It’s easy to tick the checkboxes on a compliance checklist with the mindset that your system is protected and not exposed to risk. If it is this simple, why do we continue to invest billions of dollars in developing security controls and software development lifecycle (SDL) practices that help harden software and minimize risk? What is the value in configuring services, tuning firewalls, and enforcing access policies only to accept a risk rating for a vulnerability directly mapped to a base score that seemingly ignores all the work done?

This contradictory model of focusing on security features yet discounting those same features has, over time, gone unchecked and unchallenged for several reasons. For many, that reason is likely the fear of what’s at risk. Others may be victims of policies built to ease compliance. Regardless of the reason, almost everyone we speak to reiterates that they wish their circumstances were different, and they truly understand the value of a proper risk-based impact assessment. So how did we get here?

Aging vulnerability standards need continuous evolution

In 2005, the Internet Category of Attacks Toolkit (ICAT) was rebranded as the National Vulnerability Database (NVD) with the primary purpose of standardizing how vulnerability data is shared as an initial reference for the US government. I doubt anyone would argue against the impact that the NVD has had in providing easily accessible and standardized data. In an effort to provide the US federal civilian government with a simple risk classification for vulnerabilities, the NVD began mapping Common Vulnerability Scoring System (CVSS) base scores with risk levels and has maintained this method ever since. In the early 2000s, computer systems were still relatively simple. While this mapping of risk levels had merit in 2005, the same is not true today.
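To make that mapping concrete, below is a minimal sketch of the qualitative severity ranges that NVD applies to CVSS v3.x base scores. The ranges come from the CVSS v3.x specification; the helper function is purely illustrative and not an NVD or Red Hat tool.

```python
# Illustrative only: CVSS v3.x qualitative severity ranges used by NVD.
CVSS_V3_SEVERITY_RANGES = [
    ("None",     0.0, 0.0),
    ("Low",      0.1, 3.9),
    ("Medium",   4.0, 6.9),
    ("High",     7.0, 8.9),
    ("Critical", 9.0, 10.0),
]

def qualitative_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity label."""
    for label, low, high in CVSS_V3_SEVERITY_RANGES:
        if low <= base_score <= high:
            return label
    raise ValueError(f"score out of range: {base_score}")

print(qualitative_severity(9.8))  # Critical
```

A flat lookup like this is exactly the problem: it has no notion of how the software was built, configured, or deployed.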

Here are two scenarios to illustrate this point.

  1. Suppose that component-x.y.z, developed by the open source community, is used by three different software vendors. Each vendor then builds or uses the component in a different way. One vendor may disable a key feature at compile time, while another may limit usage with configuration. A third vendor may disable access to that feature. Why would it then make sense to expect a vulnerability to affect the Confidentiality, Integrity, and Availability (CIA) of a system in a similar manner across each vendor offering?
  2. In late 2000, we saw an expansion of Linux-centric security tooling emerge, built in part to address early iterations of the Federal Information Processing Standards (FIPS). The National Security Agency (NSA) introduced SELinux and the security feature swing started in earnest, with Linux security modules, two-factor authentication and packet-filtering enhancements, as well as new tools like fapolicyd and system services like realmd. Vulnerable code is still vulnerable code; however, the conditions needed for that vulnerable code to be executed are subject to the security and mitigation controls that are in place. These new controls limited the exploitability of that component and thus changed the risk level, as the sketch after this list illustrates.
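As a rough illustration of the second scenario, a sketch like the following (not a Red Hat tool; getenforce and systemctl are standard SELinux and systemd utilities, everything else is illustrative) gathers a couple of host-level mitigation signals that a base score alone cannot account for.

```python
# A minimal sketch: collect host-level mitigation signals that a CVSS base
# score cannot see. getenforce and systemctl are standard on SELinux/systemd
# systems; the rest of this script is illustrative only.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its trimmed stdout, or '' if it is unavailable."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=5)
        return result.stdout.strip()
    except (OSError, subprocess.TimeoutExpired):
        return ""

mitigations = {
    "selinux_mode": run(["getenforce"]),                           # Enforcing / Permissive / Disabled
    "fapolicyd":    run(["systemctl", "is-active", "fapolicyd"]),  # active / inactive
}
print(mitigations)
```

Signals like these are inputs to a risk-based assessment: a confined, enforcing system may justify treating the same CVE very differently than an unconfined one.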

When I joined Red Hat back in 2005, I remember questioning the business model for open source software when it was available for free. Why pay a subscription fee? The answer is trust and accountability. Who’s responsible for the maintenance? Who’s responsible for the security posture of the product or service?

The answer to both of those questions is the vendor. The vendor is accountable for an impact assessment of vulnerabilities based on the default builds and configurations provided in the subscription. It is the job of a Product Security Incident Response Team (PSIRT) to identify and assess that impact. We are accountable to customers, and these assessments are intended to help prioritize and minimize security risk. This is part of the value of a software subscription. There are many vendors just like Red Hat who do impact assessments on vulnerabilities and do them well. These impact assessments aren’t just intended for customers; they are also how vendor engineering teams prioritize. If a vendor rates a CVE as Moderate, then, per lifecycle policies, that CVE is not likely to be fixed in product versions that are in a maintenance phase.

Break the contradiction

In order to break the contradictory model we’re in, vendor assessments must be trusted over the ease of broad impact assessments based merely on the base score, which really only represents the base characteristics of the vulnerability. This does not imply that the vendor rating is always applicable. Most users do not run a default configuration for everything, and some opt to add applications and libraries or to replace what was provided. Users who create these scenarios accept accountability for the impact assessment.

A good security practitioner, namely one who is accountable for prioritizing vulnerability remediation in a Computer Security Incident Response Team (CSIRT) environment, should use the vendor-provided CVSS base score as intended and leverage the additional temporal and environmental scores. Why? Because relying solely on the base score wastes company focus and resources, ignores hardening and system/data risk, and could mask a greater impact. This point is important enough that FIRST highlights it in the CVSS user guide.
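As a sketch of what that looks like in practice, assuming the open source cvss Python package (pip install cvss), the example below takes a worst-case network-exploitable base vector and applies hypothetical environmental metrics for a deployment where the vulnerable code path is only reachable locally and requires high privileges.

```python
# A sketch, not an official workflow: score a CVE with environmental metrics
# using the open source "cvss" package. The vector and modified metrics are
# hypothetical examples.
from cvss import CVSS3

# Worst-case base vector: network-reachable, no privileges, high CIA impact (base 9.8).
base = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

# Environmental view of one deployment: the attack vector is effectively local
# (MAV:L) and high privileges are required (MPR:H) because of local controls.
env = CVSS3(base + "/MAV:L/MPR:H")

base_score, temporal_score, environmental_score = env.scores()
print(f"base={base_score} environmental={environmental_score}")
```

On an example like this, the environmental score lands several points below the base score and out of the Critical range; the exact value depends on the metrics you set, which is the point: the same CVE carries different risk in different environments.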

Minimizing risk and exposure to vulnerabilities must remain a multi-pronged approach. Security teams should continue to scan for and remediate vulnerability findings. However, this effort must not compete with or devalue the investment in software features that effectively harden systems and minimize exposure. There is no easy button.
