MISC-23: Nginx Integer Overflow (CVE-2017-7529)



Issue Information

Issue Type: Bug
 
Priority: Major
Status: Closed

Reported By:
Ben Tasker
Assigned To:
Ben Tasker
Project: Miscellaneous (MISC)
Resolution: Incomplete (2019-07-16 16:03:56)
Labels: Integer, Nginx, Overflow, Vulnerability

Created: 2017-07-13 08:38:24
Time Spent Working


Description
Nginx have announced a fix for an information disclosure vulnerability arising from an integer overflow when multiple ranges are requested (via the HTTP Range header):

When using nginx with standard modules this allows an attacker to
obtain a cache file header if a response was returned from cache.
In some configurations a cache file header may contain IP address
of the backend server or other sensitive information.


The issue affects a wide range of versions:

- nginx 0.5.6 - 1.13.2

The issue is fixed in nginx 1.13.3 and 1.12.1.

There's a known mitigation - limiting the number of ranges permitted in a request to 1 (which suggests it should be possible to exploit with just 2 ranges):

max_ranges 1;


Issue Links

Security Advisory (Mailarchives)

Activity


The patch for the issue is fairly simple; it just implements a bounds check:

diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c
--- a/src/http/modules/ngx_http_range_filter_module.c
+++ b/src/http/modules/ngx_http_range_filter_module.c
@@ -377,6 +377,10 @@ ngx_http_range_parse(ngx_http_request_t 
             range->start = start;
             range->end = end;
 
+            if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
+                return NGX_HTTP_RANGE_NOT_SATISFIABLE;
+            }
+
             size += end - start;
 
             if (ranges-- == 0) {


Need to look at the module to see how size is calculated initially, but from the patch it looks as though exploitation is just a case of requesting ranges so that the total number of bytes requested (i.e. the sum of the deltas between each start and end) is high enough to overflow the integer. Nginx then winds up ignoring the usual offset when reading the cache file, so it no longer excludes the cache file header.
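
To make the arithmetic concrete, a minimal standalone sketch (my own, not the nginx code itself, and with exaggerated per-range spans purely to show the wrap condition) of how a signed 64-bit size accumulator behaves with and without the patch's check:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the configure-time constant on a 64-bit Linux box */
#define NGX_MAX_OFF_T_VALUE  INT64_MAX

int main(void) {
    int64_t size = 0;

    /* Pretend each requested range spans this many bytes (exaggerated) */
    int64_t delta = INT64_MAX / 4 + 1;

    for (int i = 0; i < 4; i++) {
        /* The patched check: refuse the request before 'size' can wrap.
         * The unpatched code just keeps adding. */
        if (size > NGX_MAX_OFF_T_VALUE - delta) {
            printf("range %d would overflow -> 416 Range Not Satisfiable\n", i);
            return 0;
        }

        size += delta;
        printf("after range %d: size=%lld\n", i, (long long) size);
    }

    return 0;
}

With the check removed, the fourth addition would take the total past the signed maximum, which is the overflow the advisory describes.
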
OK, so within ngx_http_range_parse, size is of type "off_t" (which is a POSIX type rather than being defined in ANSI C).

Its size is controlled by a macro (_FILE_OFFSET_BITS) based upon the platform, but it should be 64 bits (8 bytes) on Linux ( https://github.com/nginx/nginx/blob/master/src/os/unix/ngx_linux_config.h#L16 )
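
A quick way to sanity-check that width on a given build box (just a sketch; nginx's configure stage does its own equivalent probing when it generates ngx_auto_config.h):

#include <stdio.h>
#include <sys/types.h>

int main(void) {
    /* On 64-bit Linux this should print 8 (i.e. a 64-bit off_t) */
    printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
    return 0;
}
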

So, with a quick calculation, it looks like we have to request an inordinate amount of data in order to trigger the issue. off_t is signed, so the accumulator tops out at 2^63 - 1 bytes:

(2^63 - 1)/1024/1024/1024/1024

8388608 TB


It's possible I've made some wrong assumptions about the patch (should really find time to read the module), but that might also be why it's been present for so long without detection.

That said, NGX_MAX_OFF_T_VALUE on Linux (and by the looks of it, on all *nix) gets calculated and set at configure time. On a 64-bit Ubuntu test box, I get the value

objs/ngx_auto_config.h:#define NGX_MAX_OFF_T_VALUE  9223372036854775807LL


Which is still a butt-load of data
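
For a rough sense of scale (my own back-of-the-envelope numbers, nothing from the advisory): against a hypothetical 1GB cached file, the number of full-file ranges needed to push the accumulator past that value works out as below.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const int64_t max_off_t = 9223372036854775807LL;     /* NGX_MAX_OFF_T_VALUE */
    const int64_t file_size = 1LL * 1024 * 1024 * 1024;  /* hypothetical 1GB test file */

    /* Each "0-<end>" range covering the whole file adds ~file_size to 'size' */
    int64_t ranges_needed = max_off_t / file_size + 1;

    printf("full-file ranges needed: %lld\n", (long long) ranges_needed);
    /* Prints 8589934592 (~2^33) - and every one of those also costs bytes
     * in the Range header itself. */
    return 0;
}
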
Nginx does validate that the ranges fall within the bounds of the file's size, so all ranges need to be valid.

What I don't know is whether it de-duplicates the requested ranges, so will have to test
~$ curl -o /dev/null -v http://streamingtest.bentasker.co.uk/Big_Buck_Bunny-Flash/bbb_sunflower_native_60fps_stereo_abl.mp4 -H "Range: bytes=0-1024,0-1024"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 46.32.254.153...
* Connected to streamingtest.bentasker.co.uk (46.32.254.153) port 80 (#0)
> GET /Big_Buck_Bunny-Flash/bbb_sunflower_native_60fps_stereo_abl.mp4 HTTP/1.1
> Host: streamingtest.bentasker.co.uk
> User-Agent: curl/7.47.0
> Accept: */*
> Range: bytes=0-1024,0-1024
> 
< HTTP/1.1 206 Partial Content
< Server: nginx/1.8.0
< Date: Thu, 13 Jul 2017 08:36:32 GMT
< Content-Type: multipart/byteranges; boundary=00000000000000018833
< Content-Length: 2264
< Last-Modified: Thu, 02 Jan 2014 17:39:27 GMT
< Connection: keep-alive
< ETag: "52c5a44f-3fcd6d05"
< X-Clacks-Overhead: GNU Terry Pratchett
< 
{ [1541 bytes data]
100  2264  100  2264    0     0  58808      0 --:--:-- --:--:-- --:--:-- 59578


Seems not, so you could potentially request the entire range repeatedly to try to trigger this issue.
Interestingly, it doesn't work on a couple of the boxes in my CDN; I just get a 200 back if I try to request multiple ranges. But Jinx does support it (though I don't see max_ranges set for the ones that don't support it, and they should all be running the same version and config. Weird... will have to look into that).

It seems it also doesn't work if you omit the end range:
-H "Range: bytes=0-,0-"


results in a 200 with the content length being one instance of the file, so the end byte must be explicitly defined.

Will come back to this some more later. Given the high number of bytes needed to trigger an integer overflow, and the limits on the length of HTTP request headers, I'll probably need to set up a system with a pretty sizeable cache partition (and a large test file).
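
As a rough feasibility check on the header-length side (my own numbers, assuming nginx's stock large_client_header_buffers size of 8k and a 1GB file), the amount of range volume that fits in a single request header is nowhere near enough:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const int64_t header_budget   = 8 * 1024;                      /* one stock 8k header buffer */
    const int64_t bytes_per_range = sizeof("0-1073741823,") - 1;   /* ~13 chars per full-file range */
    const int64_t file_size       = 1LL << 30;                     /* 1GB file */

    int64_t max_ranges   = header_budget / bytes_per_range;
    int64_t total_volume = max_ranges * file_size;

    printf("ranges per header: %lld\n", (long long) max_ranges);
    printf("bytes those ranges cover: %lld (~%lld GB)\n",
           (long long) total_volume, (long long) (total_volume >> 30));
    /* ~630 ranges covering ~630GB per request - a long way short of the
     * ~8 exabytes needed to wrap off_t, hence needing a much larger test
     * file (or a different angle). */
    return 0;
}
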
btasker changed Project from 'PWNIT' to 'Miscellaneous'
btasker changed Key from 'PWNIT-1' to 'MISC-23'
This is interesting: it looks like if your ranges overlap enough to encompass the entire file, Nginx will just ignore the ranges and give back one instance of the file. So you can only request a portion of the file, though it does look like you can re-request it.
I'm closing this as Incomplete. Because of other stuff coming up, it's sat on my to-do list for over 2 years now.

Given the time that's passed, there's probably not much value in finishing it off anyway.
btasker changed status from 'Open' to 'Resolved'
btasker added 'Ben Tasker' to assignee
btasker added 'Incomplete' to resolution
btasker changed status from 'Resolved' to 'Closed'