The Rise of Rust, the ‘Viral’ Secure Programming Language That’s Taking Over Tech

Rust makes it impossible to introduce some of the most common security vulnerabilities. And its adoption can’t come soon enough.

Wired

Whether you run IT for a massive organization or simply own a smartphone, you’re intimately familiar with the unending stream of software updates that constantly need to be installed because of bugs and security vulnerabilities. People make mistakes, so code is inevitably going to contain mistakes—you get it. But a growing movement to write software in a language called Rust is gaining momentum because the code is goof-proof in an important way. By design, developers can’t accidentally create the most common types of exploitable security vulnerabilities when they’re coding in Rust, a distinction that could make a huge difference in the daily patch parade and ultimately the world’s baseline cybersecurity.

There are fads in programming languages, and new ones come and go, often without lasting impact. Now 12 years old, Rust took time to mature from the side project of a Mozilla researcher into a robust ecosystem. Meanwhile, the predecessor language C, which is still widely used today, turned 50 this year. But because Rust produces more secure code and, crucially, doesn’t worsen performance to do it, the language has been steadily gaining adherents and now is at a turning point. Microsoft, Google, and Amazon Web Services have all been utilizing Rust since 2019, and the three companies formed the nonprofit Rust Foundation with Mozilla and Huawei in 2020 to sustain and grow the language. And after a couple of years of intensive work, the Linux kernel took its first steps last month to implement Rust support.

“It’s going viral as a language,” says Dave Kleidermacher, vice president of engineering for Android security and privacy. “We’ve been investing in Rust on Android and across Google, and so many engineers are like, ‘How do I start doing this? This is great.’ And Rust just landed for the first time as an officially recognized and accepted language in Linux. So this is not just Android; any system based on Linux now can start to incorporate Rust components.”

Rust is what’s known as a “memory-safe” language because it’s designed to make it impossible for a program to pull unintended data from a computer’s memory accidentally. When programmers use stalwart languages that don’t have this property, including C and C++, they have to carefully check the parameters of what data their program is going to be requesting and how—a task that even the most skilled and experienced developers will occasionally botch. By writing new software in Rust instead, even amateur programmers can be confident that they haven’t introduced any memory-safety bugs into their code.

A program’s memory is a shared resource used by all of its features and libraries. Imagine a calendar program written in a language that isn’t memory-safe. You open your calendar and then request entries for November 2, 2022, and the program fetches all information from the area of your computer’s memory assigned to store that date’s data. All good. But if the program isn’t designed with the right constraints, and you request entries for November 42, 2022, the software, instead of producing an error or other failure, may dutifully return information from a part of the memory that’s housing different data—maybe the password you use to protect your calendar or the credit card number you keep on file for premium calendar features. And if you add a birthday party to your calendar on November 42, it may overwrite unrelated data in memory instead of telling you that it can’t complete the task. These are known as “out-of-bounds” read and write bugs, and you can see how they could potentially be exploited to give an attacker improper access to data or even expanded system control.
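To make the calendar analogy concrete, here is a minimal, hypothetical Rust sketch (the `days` vector and its contents are invented for illustration, not taken from any real calendar program) of how the language handles the “November 42” case: an out-of-range read comes back as `None` or stops the program with a runtime panic, rather than silently touching neighboring memory.

```rust
fn main() {
    // Toy stand-in for one month of calendar storage: 30 slots, one per day.
    let mut days: Vec<String> = vec![String::from("(empty)"); 30];

    // A valid write and read: add an entry on day 2 and fetch it back.
    days[1] = String::from("Dentist, 9 am");
    println!("Nov 2: {}", days[1]);

    // "November 42" with the checked accessor: we get None back instead of
    // whatever data happens to live past the end of the buffer.
    match days.get(41) {
        Some(entry) => println!("Nov 42: {}", entry),
        None => println!("Nov 42: no such day"),
    }

    // Direct indexing is also checked: going past the end panics with
    // "index out of bounds" at runtime instead of silently reading or
    // overwriting unrelated memory, which is what an out-of-bounds access
    // would do in a memory-unsafe language.
    // days[41] = String::from("Birthday party"); // panics if uncommented

    // Reaching raw, truly unchecked memory (the C-style failure mode)
    // requires an explicit `unsafe` block, so it can't happen by accident.
}
```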

Another common type of memory-safety bug, known as “use-after-free,” involves a situation where a program has given up its claim to a portion of memory (maybe you deleted all your calendar entries for October 2022) but mistakenly retains access. If you later request data from October 17, the program may be able to grab whatever data has ended up there. And the existence of memory-safety vulnerabilities in code also introduces the possibility that a hacker could craft, say, a malicious calendar invitation with a strategically chosen date or set of event details designed to manipulate the memory to grant the attacker remote access.
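Below is a similarly hypothetical Rust sketch of the use-after-free scenario, again with made-up calendar data: once the program gives up its claim to October’s memory, the compiler’s borrow checker refuses to build any code that still reaches for it.

```rust
fn main() {
    // Toy stand-in for October's calendar entries.
    let october = vec![String::from("Oct 17: Flu shot")];

    // Hold on to a reference into October's data...
    let entry = &october[0];
    println!("Still valid: {}", entry);

    // ...then "delete October," releasing that memory.
    drop(october);

    // Using the stale reference now would be a use-after-free. Rust rejects
    // it at compile time instead of letting the program read recycled memory:
    //   error[E0505]: cannot move out of `october` because it is borrowed
    // println!("{}", entry);
}
```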

These types of vulnerabilities aren’t just esoteric software bugs. Research and auditing have repeatedly found that they make up the majority of all software vulnerabilities. So while you can still make mistakes and create security flaws while programming in Rust, the opportunity to eliminate memory-safety vulnerabilities is significant.

“Memory-safety issues are responsible for a huge, huge percentage of all reported vulnerabilities, and this is in critical applications like operating systems, mobile phones, and infrastructure,” says Dan Lorenc, CEO of the software supply-chain security company Chainguard. “Over the decades that people have been writing code in memory-unsafe languages, we’ve tried to improve and build better tooling and teach people how to not make these mistakes, but there are just limits to how much telling people to try harder can actually work. So you need a new technology that just makes that entire class of vulnerabilities impossible, and that’s what Rust is finally bringing to the table.”

Rust is not without its skeptics and detractors. The effort over the last two years to implement Rust in Linux has been controversial, partly because adding support for any other language inherently increases complexity, and partly because of debates about how, specifically, to go about making it all work. But proponents emphasize that Rust has the necessary elements—it doesn’t cause performance loss, and it interoperates well with software written in other languages—and that it is crucial simply because it meets a dire need.

“It’s less that it’s the right choice and more that it’s ready,” Lorenc, a longtime open-source contributor and researcher, says. “There are no real alternatives right now, other than not doing anything, and that’s just not an option anymore. Continuing to use memory-unsafe code for another decade would be a massive problem for the tech industry, for national security, for everything.”

One of the biggest challenges of the transition to Rust, though, is precisely all the decades that developers have already spent writing vital code in memory-unsafe languages. Writing new software in Rust doesn’t address that massive backlog. The Linux kernel implementation, for example, is starting on the periphery by supporting Rust-based drivers, the programs that coordinate between an operating system and hardware like a printer.

“When you’re doing operating systems, speed and performance is always top-of-mind, and the parts that you’re running in C++ or C are usually the parts that you just can’t run in Java or other memory-safe languages, because of performance,” Google’s Kleidermacher says. “So to be able to run Rust and have the same performance but get the memory safety is really cool. But it’s a journey. You can’t just go and rewrite 50 million lines of code overnight, so we’re carefully picking security-critical components, and over time we’ll retrofit other things.”

In Android, Kleidermacher says a lot of encryption-key-management features are now written in Rust, as is the private internet communication feature DNS over HTTPS, a new version of the ultra-wideband chip stack, and the new Android Virtualization Framework used in Google’s custom Tensor G2 chips. He adds that the Android team is increasingly converting connectivity stacks like those for Bluetooth and Wi-Fi to Rust because they are based on complex industry standards and tend to contain a lot of vulnerabilities. In short, the strategy is to start getting incremental security benefits from converting the most exposed or vital software components to Rust first and then working inward from there.

“Yes, it’s a lot of work, it will be a lot of work, but the tech industry has how many trillions of dollars, plus how many talented programmers? We have the resources,” says Josh Aas, executive director of the Internet Security Research Group, which runs the memory-safety initiative Prossimo as well as the free certificate authority Let’s Encrypt. “Problems that are merely a lot of work are great.”

As Rust makes the transition to mainstream adoption, the case for some type of solution to memory-safety issues seems to get made anew every day. Just this week, a high-severity vulnerability was disclosed in the ubiquitous secure communication library OpenSSL, one that could have been prevented had the affected code been written in a memory-safe language. And unlike the notorious 2014 OpenSSL vulnerability Heartbleed, which lurked unnoticed for two years and exposed websites across the internet to data interception attacks, this new bug had been introduced into OpenSSL in the past few months, in spite of efforts to reduce memory-safety vulnerabilities.

“How many people right now are living the identity-theft nightmare because of a memory-safety bug? Or on a national security level, if we’re worried about cyberattacks on the United States, how much of that threat is on the back of memory-safety vulnerabilities?” Aas says. “From my point of view, the whole game now is just convincing people to put in the effort. Do we understand the threat well enough, and do we have the will?”
