Downloading with HTTP/3 produces broken files #7351
Comments
I cannot reproduce. I've repeatedly run curl built with both ngtcp2 and quiche (all from current git) and neither shows any problem when downloading from an h3 server running on localhost. I run nghttpx as an h3=>h1 proxy and download a random 14MB file for testing.
I believe this will be hard to reproduce on localhost, since no packets are dropped, reordered, or delayed. Note there were no issues on a 100Mbit link; it happened only on 10Gbit with GSO enabled (nginx-quic was used as the server). I will try to reproduce it once again.
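One way to get dropped/reordered packets on localhost is to add a netem qdisc while running the test. This is only a sketch of that idea, not something from the report: it requires root, and the interface name (`lo`) and the delay/reorder/loss figures are illustrative examples.

```shell
# Emulate the loss/reordering that seems needed to trigger the bug,
# even on a loopback link. All numbers here are placeholder examples.
tc qdisc add dev lo root netem delay 2ms reorder 25% loss 0.5%
# ...run the curl --http3 download against the local h3 server here...
tc qdisc del dev lo root
```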
OK, I've got it 100% reproducible.
I made 2 runs of curl, then 2 runs of the ngtcp2 client against a test.mov file.
The error.log for curl shows 2 timed-out connections:
On the client side I have:
ngtcp2 runs are good:
Unfortunately, enabling debug logging on the server makes the issue disappear (the server has to write more than a gigabyte of logs).
curl:
ngtcp2 client:
I think that massive difference says something. I can only guess that we're not doing something with the ngtcp2 API that we should, but I don't know what it is. @tatsuhiro-t, any ideas?
Curl is fast, which is nice, but it looks like curl does not pull all stream data and prematurely stops the transfer.
#8504 fixes this issue, at least in my local test.
@vl409, can you check whether that fix works for you as well?
Yes, the patch indeed fixes the issue: files are now downloaded without errors. Thanks for the fix! Below are some stats from 10 runs of the ngtcp2 client and curl:

curl:
What are these numbers, and what is this supposed to be a comparison of?
Downloading a ~10MB file over a 10Gbit link produces a broken download (garbage within the content; see the attachment bug-data.tar.gz).
Downloading the same file from the same server with the same curl binary over a 100Mbit link is fine.
Downloading the same file from the same server using the ngtcp2 client is fine.
The issue is reproducible.
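A quick way to confirm a corrupted transfer is to byte-compare the suspect HTTP/3 download against a known-good copy of the same file (e.g. one fetched over HTTP/1.1). The snippet below is only a sketch: the file names are placeholders, and two locally created stand-in files take the place of the actual downloads so the comparison itself is runnable.

```shell
# Stand-in files simulating a good download and a corrupted one.
printf 'expected data' > digits.good     # e.g. fetched over HTTP/1.1
printf 'expectXd data' > digits.broken   # e.g. fetched over HTTP/3
# Print the checksums, then report whether the contents differ.
sha256sum digits.good digits.broken
if ! cmp -s digits.good digits.broken; then
    echo "files differ"
fi
```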
The attachment contains:
export SSLKEYLOGFILE=ssl_key_file
./bin/curl -k --http3 -o digits.broken https://ip:port/digits
Versions:
curl from git, 0a0bc4a
./bin/curl -V
curl 7.78.0-DEV (x86_64-pc-linux-gnu) libcurl/7.78.0-DEV OpenSSL/3.0.0 zlib/1.2.11 brotli/1.0.9 zstd/1.4.9 libidn2/2.3.1 libpsl/0.21.0 (+libidn2/2.3.0) nghttp2/1.41.0 ngtcp2/0.1.0-DEV nghttp3/0.1.0-DEV libgsasl/1.10.0 OpenLDAP/2.4.57
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli gsasl HSTS HTTP2 HTTP3 HTTPS-proxy IDN IPv6 Largefile libz NTLM NTLM_WB PSL SSL TLS-SRP UnixSockets zstd
ngtcp2 from git, f60c3fb447e0b4605a99fa8cd9226330aae9014a
quic-tls from git, 50a5c6fec45aed8b49e28fd23aeaf590d39becc4
nghttp3 from git, bbc426ecdbc119fd7f12a17850f64c440844af01
curl is configured using the command:
cd curl
autoreconf -fi
LDFLAGS="-Wl,-rpath,<ssl path>/lib" ./configure --with-ssl=<...> --with-ngtcp2=<...> --with-nghttp3=<...> --enable-alt-svc --prefix=<...>