We believe that the Qualys WAS is reporting a false positive for HTTP Request Smuggling on the Acquia platform.
Qualys published a blog about this check:
The attack request looks like this:
POST https://example.com HTTP/1.1
User-Agent: Acquia SecOps
Accept: */*
Content-Length: 4
Transfer-Encoding: chunked
Content-Type: application/x-www-form-urlencoded
Host: example.com

0

G
If this is sent to an Acquia-hosted site over http / port 80, it will be rejected immediately, e.g.:
HTTP/1.1 400 Bad Request
Sent as https to port 443, however, the request is accepted. The scanner then sends a second, identical request.
As far as the scanner is concerned, a 403, 405, or 501 response to that second request suggests that the system is vulnerable to HTTP Request Smuggling.
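The scanner's decision rule, as described above, can be sketched as a small function. This is an illustrative reconstruction of the heuristic, not Qualys's actual code; the status values are the ones named in the check description.

```python
# Statuses that the scanner treats as evidence of CL.TE smuggling
# when returned for the SECOND of the two identical probe requests.
SUSPICIOUS_STATUSES = {403, 405, 501}

def scanner_verdict(first_status: int, second_status: int) -> str:
    """Sketch of the scanner's verdict given the two response statuses."""
    if first_status == 400:
        # The probe was rejected outright (the http/port-80 case).
        return "rejected"
    if second_status in SUSPICIOUS_STATUSES:
        return "possible CL.TE smuggling"
    return "not vulnerable"
```

As the rest of this analysis argues, a 405 to the second request is also exactly what a correctly normalising front end produces, so the heuristic alone cannot distinguish smuggling from safe behaviour.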
In some cases, a 405 response will be returned as a response to the second request on Acquia sites.
However, we disagree that this represents an HTTP Request Smuggling vulnerability.
The scanner is specifically testing for the CL:TE variant which means:
CL:TE attack method – The front-end server processes the Content-Length header, and the back-end server processes the Transfer-Encoding header.
In the case of https/443 requests, the front-end server for Acquia is nginx.
Our analysis has concluded that in fact nginx is following the HTTP spec:
If a message is received with both a Transfer-Encoding header field and a Content-Length header field, the latter MUST be ignored.
For the test requests, nginx does exactly this: it ignores the Content-Length header and parses the request as Transfer-Encoding: chunked. This means that the payload of the request will be terminated by a 0 followed by a blank line.
The payload in question is:

0

G
So nginx sees the first two lines of this as the termination of the chunked HTTP request. The remainder of the payload is effectively added to the buffer - more on this later.
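The chunked-parsing behaviour described above can be sketched as a minimal parser. This is an illustration of the mechanism, not nginx source: the "0" line plus the blank line that follows it end the message, and any trailing bytes (the stray "G" here) stay in the buffer.

```python
def parse_chunked(buf: bytes):
    """Return (body, leftover) for a chunked transfer-coded payload."""
    body = b""
    while True:
        size_line, _, rest = buf.partition(b"\r\n")
        size = int(size_line, 16)          # chunk sizes are hexadecimal
        if size == 0:
            # "0\r\n" plus the empty trailer's "\r\n" terminate the
            # message; anything after belongs to the NEXT request.
            leftover = rest.partition(b"\r\n")[2]
            return body, leftover
        body += rest[:size]
        buf = rest[size + 2:]              # skip chunk data and its CRLF

body, leftover = parse_chunked(b"0\r\n\r\nG")
# body is empty; the stray "G" remains buffered for the next request
```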
The HTTP request which nginx has parsed is then forwarded on to the next layer of the stack (the back-end) without a Transfer-Encoding header. Instead the request is given a Content-Length header with a value of 0. This is correct, as the payload was effectively empty; all it contained was the sequence of bytes to denote its end.
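The normalisation step just described can be sketched as follows. This is an assumed model of the behaviour, not nginx code: the forwarded request drops Transfer-Encoding and carries a Content-Length recomputed from the decoded body.

```python
def normalise_for_backend(headers: dict, decoded_body: bytes) -> dict:
    """Sketch: rebuild headers for the forwarded (normalised) request."""
    fwd = {k: v for k, v in headers.items()
           if k.lower() not in ("transfer-encoding", "content-length")}
    # Content-Length is recomputed from the body actually decoded.
    fwd["Content-Length"] = str(len(decoded_body))
    return fwd

fwd = normalise_for_backend(
    {"Host": "example.com", "Content-Length": "4",
     "Transfer-Encoding": "chunked"},
    b"",  # the decoded chunked body was empty
)
# fwd == {"Host": "example.com", "Content-Length": "0"}
```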
Where any "smuggling" could be said to have taken place is in the treatment of the trailing content that came after the termination of the chunked payload. In the example above, the G character is added to the buffer. If another HTTP request is received from the same source (before a timeout is reached), it is treated as the start of the next HTTP request.
That's why we see nginx processing a request that looks like:
"GPOST / HTTP/1.1"
Typically, this request will then be forwarded on to the backend again without a Transfer-Encoding header but with a correct Content-Length header. In other words, the request that's forwarded to the backend is not malformed in terms of the HTTP protocol; it just happens to begin with an invalid method. Therefore the backend may respond with an error indicating that the method is not supported or implemented.
In other words, malformed HTTP requests are being parsed in accordance with the HTTP specification by the front-end server, which normalises these requests and passes corrected requests on to the backend. Note that this is how the research referenced by Qualys suggests defending against these attacks:
Specific instances of this vulnerability can be resolved by reconfiguring the front-end server to normalize ambiguous requests before routing them onward.
We believe that the false positive result here is very similar to - if not exactly the same as - what's described in this very good writeup on the topic:
Security reports are often disclosed to maintainers as HTTP request smuggling issues due to servers responding to multiple requests sent and this being visible as two separate responses. It should be noted that many servers support Keep-Alive and pipelining—this by itself does not make an HTTP request smuggling vulnerability. This is the case in CVE-2020-12440 reported for NGINX.