curl 7.75 - decompression error with limit-rate #6640

Closed
a-denoyelle opened this issue Feb 22, 2021 · 5 comments

a-denoyelle (Contributor) commented Feb 22, 2021

Hi,

I am currently experiencing an error with curl when running the haproxy compression reg-test suite. We use curl as an HTTP client with the options --compressed --limit-rate 300K. curl cannot decompress the output and returns various errors such as "Error while processing content unencoding: invalid block type". I do not encounter the issue with wget.

I will provide a reproducible scenario soon.

curl/libcurl version

curl 7.75.0 (x86_64-pc-linux-gnu) libcurl/7.75.0 OpenSSL/1.1.1j zlib/1.2.11 zstd/1.4.8 libidn2/2.3.0 libpsl/0.21.1 (+libidn2/2.3.0) libssh2/1.9.0 nghttp2/1.41.0
Release-Date: 2021-02-03
Protocols: dict file ftp ftps gopher gophers http https imap imaps mqtt pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS GSS-API HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd

operating system

Linux hostname 5.10.16-arch1-1 #1 SMP PREEMPT Sat, 13 Feb 2021 20:50:18 +0000 x86_64 GNU/Linux

bagder added the HTTP label Feb 22, 2021
a-denoyelle (Contributor, Author) commented Feb 22, 2021

To reproduce the issue, I'm using curl and haproxy compiled with Lua support.

Here is the haproxy config file. I named it compression.conf.

global
        lua-load lua_validation.lua

defaults
        mode http

frontend fe
        bind :20080
        compression algo gzip
        compression type text/plain
        compression offload
        use_backend be

backend be
        server big_payload_hap 127.0.0.1:20081

listen big_payload
        bind :20081
        http-request use-service lua.fileloader-http01

I use a Lua script to generate a big payload. Place it in the same directory as the haproxy config under the name 'lua_validation.lua'.

local data = "abcdefghijklmnopqrstuvwxyz"
local responseblob = ""
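-- build one large plain-text payload out of 10000 variable-length lines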
for i = 1,10000 do
  responseblob = responseblob .. "\r\n" .. i .. data:sub(1, math.floor(i % 27))
end

http01applet = function(applet)
  local response = responseblob
  applet:set_status(200)
  applet:add_header("Content-Type", "text/plain")
  applet:add_header("Content-Length", string.len(response)*10)
  applet:start_response()
  for i = 1,10 do
    applet:send(response)
  end
end

core.register_service("fileloader-http01", "http", http01applet)

Run haproxy:
$ haproxy -db -f compression.conf

Then run the curl client to reproduce the error:
$ curl -o /dev/null --compressed --limit-rate 300K http://127.0.0.1:20080/

For what it's worth, I have bisected the curl repository (I cannot reproduce the issue with 7.74). The issue seems to be related to the following commit:

commit b68dc34af341805aeb7b371541a2b4074da76217 (HEAD, refs/bisect/bad)
multi: set the PRETRANSFER time-stamp when we switch to PERFORM
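
For reference, a bisect like this can be scripted (assuming the upstream release tags curl-7_74_0 and curl-7_75_0, an already configured curl build tree, and the haproxy setup above running locally):

$ git bisect start
$ git bisect bad curl-7_75_0     # 7.75.0 reproduces the error
$ git bisect good curl-7_74_0    # 7.74.0 does not
$ # at each step, rebuild and re-run the failing transfer:
$ make && ./src/curl -o /dev/null --compressed --limit-rate 300K http://127.0.0.1:20080/
$ git bisect good    # or 'git bisect bad' if the unencoding error appears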

a-denoyelle changed the title from "curl 7.55 - decompression error with limit-rate" to "curl 7.75 - decompression error with limit-rate" Feb 22, 2021
bagder (Member) commented Feb 22, 2021

I don't have any machines around to easily set up such a test page. Can we use a public URL with compressed content?

./src/curl --compressed --limit-rate 10K https://curl.se/changes.html -o /dev/null

... works for me with curl from git.

bagder (Member) commented Feb 22, 2021

Thanks to a private URL from @a-denoyelle I can reproduce it, and I see the mistake in b68dc34. Stand by for a PR.

bagder self-assigned this Feb 22, 2021
bagder added a commit that referenced this issue Feb 22, 2021
... since the state machine might go to RATELIMITING and then back to
PERFORMING, doing once-per-transfer inits in that function is wrong: it
caused problems with receiving chunked HTTP and it set the PRETRANSFER
time much too often...

Regression from b68dc34 (shipped in 7.75.0)

Reported-by: Amaury Denoyelle
Fixes #6640
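
The gist of the fix: within a single transfer the state machine may legitimately bounce PERFORMING -> RATELIMITING -> PERFORMING, so anything that must happen exactly once per transfer has to be guarded rather than done on every entry into PERFORMING. A minimal sketch of that guard pattern in C, with hypothetical names (not curl's actual code):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* hypothetical per-transfer state, not curl's actual struct */
struct transfer {
  bool inited;        /* once-per-transfer init already done? */
  time_t pretransfer; /* PRETRANSFER timestamp, stamped exactly once */
};

/* called on every entry into PERFORMING, including re-entries
   after a RATELIMITING pause */
static void enter_performing(struct transfer *t)
{
  if(!t->inited) {
    /* once-per-transfer work: stamp PRETRANSFER, set up the
       response decoders (unchunking, decompression), etc. */
    t->pretransfer = time(NULL);
    t->inited = true;
  }
  /* per-entry work (the actual sending/receiving) goes here */
}

int main(void)
{
  struct transfer t = {false, 0};
  enter_performing(&t); /* first entry: init runs */
  enter_performing(&t); /* re-entry after rate limiting: init skipped */
  printf("PRETRANSFER stamped at %ld\n", (long)t.pretransfer);
  return 0;
}

Per the commit message, doing those inits unconditionally on every entry is what broke reception of chunked HTTP here and set the PRETRANSFER time far too often.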
bagder closed this as completed in bf60147 Feb 22, 2021
a-denoyelle (Contributor, Author) commented Feb 22, 2021

Thanks @bagder for the quick fix; I confirm the issue is resolved on my side.

bagder (Member) commented Feb 22, 2021

Thanks!
