
chunked POST via callback regression in 8.7.X #13229

Closed

bagder opened this issue Mar 30, 2024 · 30 comments

@bagder
Member

bagder commented Mar 30, 2024

I did this

(issue imported from https://marc.info/?l=git&m=171175679831214&w=2, authored by Jeff King (@peff))

I noticed some http-related failures in the test suite on my Debian
unstable system, which recently got an upgraded curl package. It looks
like it's related to cases where we use the remote-curl helper in
"connect" mode (i.e., protocol v2) and the http buffer is small
(requiring us to stream the data to curl). Besides just running t5551,
an easy reproduction is:

  [this works]
  $ git ls-remote https://git.kernel.org/pub/scm/git/git.git | wc -l
  1867

  [this doesn't]
  $ git -c http.postbuffer=65536 ls-remote https://git.kernel.org/pub/scm/git/git.git
  fatal: expected flush after ref listing

The error message comes from ls-remote itself, which was expecting a
FLUSH packet from the remote. Instead it gets the RESPONSE_END from
remote-curl (remember that in connect mode, remote-curl is just ferrying
bytes back and forth between ls-remote and the server).

It works with older versions of libcurl, but not 8.7.0 (or 8.7.1).
Bisecting in libcurl points to 9369c30 (lib: Curl_read/Curl_write
clarifications, 2024-02-15).

Running with GIT_TRACE_CURL=1 shows weirdness on the POST we send to
issue the ls-refs command. With older curl, I see this:

  => Send header: POST /pub/scm/git/git.git/git-upload-pack HTTP/1.1
  => Send header: Host: git.kernel.org
  => Send header: User-Agent: git/2.44.0.789.g252ee96bc5.dirty
  => Send header: Accept-Encoding: deflate, gzip
  => Send header: Content-Type: application/x-git-upload-pack-request
  => Send header: Accept: application/x-git-upload-pack-result
  => Send header: Git-Protocol: version=2
  => Send header: Transfer-Encoding: chunked
  => Send header:
  => Send data: 14..0014command=ls-refs...
  => Send data: 2a..002aagent=git/2.44.0.789.g252ee96bc5.dirty..
  [and so on until...]
  == Info: Signaling end of chunked upload via terminating chunk.

But with the broken version, I get:

  => Send header: POST /pub/scm/git/git.git/git-upload-pack HTTP/1.1
  => Send header: Host: git.kernel.org
  => Send header: User-Agent: git/2.44.0.789.g252ee96bc5.dirty
  => Send header: Accept-Encoding: deflate, gzip, br, zstd
  => Send header: Content-Type: application/x-git-upload-pack-request
  => Send header: Accept: application/x-git-upload-pack-result
  => Send header: Git-Protocol: version=2
  => Send header: Transfer-Encoding: chunked
  => Send header:
  => Send data, 0000000014 bytes (0x0000000e)
  => Send data: 4..0014..0....
  == Info: upload completely sent off: 14 bytes

So we only get the first 4 bytes, and then we quit (the double mention
of 14 is confusing, but I think it is both the size of the pkt-line
("command=ls-refs\n") and the length of the 4-byte string when framed
with chunked transfer-encoding). Those 4 bytes are the first thing
returned by rpc_out(), which we use as our CURLOPT_READFUNCTION.

It's possible that we're doing something wrong with our read/write
function callbacks. But I don't see how; we say "here's 4 bytes", but
then we never get called again. It's like curl is giving up on trying to
read the post input early for some reason.

I'm not sure how to dig further. That commit is pretty big and scary. I
did check that the tip of master in curl.git is still affected (I'd
hoped maybe the 0-length write fixes in b30d694 would be related,
but that's not it).

Ideas?

I expected the following

No response

curl/libcurl version

8.7.0 and 8.7.1

operating system

Linux, but probably not relevant

@bagder
Member Author

bagder commented Mar 30, 2024

@icing it feels like a topic for you!

@bagder
Member Author

bagder commented Mar 30, 2024

I wrote up debugit.c in an attempt to reproduce, but I failed.

@bagder
Member Author

bagder commented Mar 30, 2024

@peff: as the difference between the working and the non-working command lines seems to be http.postbuffer=65536 - is that perhaps setting the CURLOPT_BUFFERSIZE or CURLOPT_UPLOAD_BUFFERSIZE to this value?

@christian-heusel

This bug report in Arch Linux might describe the same bug: https://gitlab.archlinux.org/archlinux/packaging/packages/curl/-/issues/6

@christian-heusel

I've played around with this a bit, and it seems like it works for some repos but not for others:

  • This fails:

    source=("git+https://github.com/xbmc/xbmc.git")
    

    With the following error:

    ==> Making package: testpkg 123-1 (Sat 30 Mar 2024 08:55:56 PM CET)
    ==> WARNING: Skipping dependency checks.
    ==> Retrieving sources...
      -> Cloning xbmc git repo...
    Cloning into bare repository '/home/chris/.cache/packaging/sources/xbmc'...
    error: RPC failed; HTTP 400 curl 22 The requested URL returned error: 400
    ==> ERROR: Failure while downloading xbmc git repo
        Aborting...
    

    log.txt with GIT_TRACE_CURL=1

  • while this one does not fail:

    source=("git+https://github.com/christian-heusel/aur.git")
    

I'm now scratching my head a bit over what the difference between the two is, as everything else stays the same. I can also confirm that the 8.6 release works as intended.

@bagder
Member Author

bagder commented Mar 30, 2024

error: RPC failed; HTTP 400 curl 22 The requested URL returned error: 400

That looks like curl gets an HTTP response code 400 back. It smells like a different issue, probably even on the server side.

@christian-heusel

That looks like curl gets an HTTP response code 400 back. It smells like a different issue, probably even on the server side.

I have bisected the issue and landed on the same commit:

9369c30cd ("lib: Curl_read/Curl_write clarifications")
Click to see bisect log
git bisect start
# status: waiting for both good and bad commits
# bad: [de7b3e89218467159a7af72d58cea8425946e97d] RELEASE-PROCEDURE: remove old release dates, add new pending ones
git bisect bad de7b3e89218467159a7af72d58cea8425946e97d
# status: waiting for good commit(s), bad commit known
# good: [5ce164e0e9290c96eb7d502173426c0a135ec008] RELEASE-NOTES: synced
git bisect good 5ce164e0e9290c96eb7d502173426c0a135ec008
# good: [17d302e56221f5040092db77d4f85086e8a20e0e] setopt: Fix disabling all protocols
git bisect good 17d302e56221f5040092db77d4f85086e8a20e0e
# bad: [05268cf801a193b68411cfa298413c3e5ca79d4f] sha512_256: add support for GnuTLS and OpenSSL
git bisect bad 05268cf801a193b68411cfa298413c3e5ca79d4f
# bad: [e455490c3c424f3a93717540369f77df79da727d] _VARIABLES.md: improve the description
git bisect bad e455490c3c424f3a93717540369f77df79da727d
# bad: [89733e2dd26b51c16b2c9f090881127dbba58ce8] configure: build & install shell completions when enabled
git bisect bad 89733e2dd26b51c16b2c9f090881127dbba58ce8
# good: [745b99e1e841cd833bc0d02fcfbf9c0e538690b3] KNOWN_BUGS: FTPS upload, FileZilla, GnuTLS and close_notify
git bisect good 745b99e1e841cd833bc0d02fcfbf9c0e538690b3
# bad: [9369c30cd87c041cf983bcdfabd1570980abbaf6] lib: Curl_read/Curl_write clarifications
git bisect bad 9369c30cd87c041cf983bcdfabd1570980abbaf6
# good: [65405459d58abed6d5b6ea20ccdca972aa807f9c] getparam: make --ftp-ssl work again
git bisect good 65405459d58abed6d5b6ea20ccdca972aa807f9c
# good: [8d67c61c47a0dcc3e150d33ab941a205e76ce9eb] curldown: Fix email address in Copyright
git bisect good 8d67c61c47a0dcc3e150d33ab941a205e76ce9eb
# first bad commit: [9369c30cd87c041cf983bcdfabd1570980abbaf6] lib: Curl_read/Curl_write clarifications

@peff
Contributor

peff commented Mar 30, 2024

@bagder I tried a very similar reproduction recipe, but couldn't convince it to fail. I'm not quite sure what Git is doing differently. Your question about CURLOPT_BUFFERSIZE is a good one, but no, we never use that option. I think the reason that setting is important is that it triggers Git to stream the input as Transfer-Encoding: chunked with a CURLOPT_READFUNCTION rather than feeding it all as a single buffer.
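
For illustration, the two upload modes described here look roughly like this in libcurl-option terms. This is a sketch with made-up names (setup_post, buffer_limit), not remote-curl.c:

#include <curl/curl.h>

/* Illustrative sketch: small bodies are handed to curl as one buffer,
 * large ones are streamed from a read callback and framed as chunked
 * (mirroring the HTTP/1.1 behavior described above). */
static void setup_post(CURL *h, struct curl_slist **headers,
                       const char *buf, curl_off_t len,
                       curl_off_t buffer_limit,
                       curl_read_callback read_cb, void *cb_state)
{
  curl_easy_setopt(h, CURLOPT_POST, 1L);
  if (len <= buffer_limit) {
    /* buffered mode: curl gets the whole body up front */
    curl_easy_setopt(h, CURLOPT_POSTFIELDS, buf);
    curl_easy_setopt(h, CURLOPT_POSTFIELDSIZE_LARGE, len);
  } else {
    /* streamed mode: curl pulls the body piecewise from the callback */
    *headers = curl_slist_append(*headers, "Transfer-Encoding: chunked");
    curl_easy_setopt(h, CURLOPT_HTTPHEADER, *headers);
    curl_easy_setopt(h, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(h, CURLOPT_READDATA, cb_state);
  }
}

In the buffered branch curl knows the exact size up front; in the streamed branch it only ever sees whatever the read callback hands over per invocation.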

@peff
Contributor

peff commented Mar 31, 2024

@christian-heusel I think you're seeing the same problem. Whether you see the HTTP 400 or not depends on whether you are hitting GitHub or a different server like git.kernel.org.

The root of the issue is that the POST we are sending is truncated, and contains just the bytes 0014. In Git's protocol that is the start of a pkt-line, and the server expects to see at least another 0x10 bytes of data. When stock Git programs are invoked by an HTTP server, they see the bogus input and just give up with an empty response, causing the server to send back an HTTP 200. But GitHub runs a custom frontend that understands both HTTP and the Git protocol, and so it actually sends the HTTP 400.

But the root cause is the same: the client's POST was invalid, because curl didn't keep invoking our CURLOPT_READFUNCTION.

As for why the failure is seen with one repo or not the other: it is related to the size of the remote repository. It looks like you aren't setting http.postBuffer. Doing so will cause you to see the failure consistently, even for ls-remote versus a clone, because every request (even just asking to list the refs) will require Git to stream the POST data using CURLOPT_READFUNCTION.

But for a clone, we'll also follow that up by asking for all of the ref tips in another POST. I'm guessing you are cloning with --mirror under the hood (or otherwise asking for all refs). In that case, aur.git will require a POST of only a few hundred bytes, and we can just buffer it. But xbmc.git will require over a megabyte, which is more than the default http.postBuffer size.

If you increase the buffer size (causing Git to buffer the whole request itself and then hand it to curl as a single pointer via CURLOPT_POSTFIELDS), then it works:

git -c http.postbuffer=2M clone --mirror https://github.com/xbmc/xbmc.git

@bagder
Member Author

bagder commented Mar 31, 2024

@peff in the case where it stops after those first 14 bytes: can you check what happens in the next invocation of the callback? (since those 14 bytes are what is sent after the callback returns from the first invocation) If the next one returns a zero, then that would explain the issue - but something tells me that is not what is going on...

@peff
Contributor

peff commented Mar 31, 2024

@bagder Nope, we are only invoked once. If I instrument Git like this:

diff --git a/remote-curl.c b/remote-curl.c
index 1161dc7fed..3aaf0c96cc 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -682,6 +682,8 @@ static size_t rpc_out(void *ptr, size_t eltsize,
 	size_t avail = rpc->len - rpc->pos;
 	enum packet_read_status status;
 
+	warning("in read func: %d bytes allowed by curl, we have %d ready",
+		(int)max, (int)avail);
 	if (!avail) {
 		rpc->initial_buffer = 0;
 		rpc->len = 0;
@@ -720,6 +722,7 @@ static size_t rpc_out(void *ptr, size_t eltsize,
 		avail = max;
 	memcpy(ptr, rpc->buf + rpc->pos, avail);
 	rpc->pos += avail;
+	warning("passed curl %d bytes", (int)avail);
 	return avail;
 }
 

I get:

$ git -c http.postbuffer=65536 ls-remote https://git.kernel.org/pub/scm/git/git.git
warning: in read func: 4 bytes allowed by curl, we have 20 ready
warning: passed curl 4 bytes
fatal: expected flush after ref listing

It is interesting that curl is only offering us 4 bytes of space to write. This could be because a previous request sent only 4 bytes (in this case it is because before streaming, Git sends a "probe" request of just 4 bytes to make sure the endpoint works at all).

And indeed, if I comment out that probe (in probe_rpc()), then curl offers us much more space, and everything works fine. So maybe the issue is that curl is confused when the READFUNCTION fills up its internal buffer?

It's also possible that Git isn't correctly re-setting the handle between the requests, but we do reset POSTFIELDS and POSTFIELDSIZE to NULL/0.

@peff
Contributor

peff commented Mar 31, 2024

Interestingly, adding this seems to make it work, at least for kernel.org:

diff --git a/remote-curl.c b/remote-curl.c
index 1161dc7fed..8f514e235c 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -962,6 +962,7 @@ static int post_rpc(struct rpc_state *rpc, int stateless_connect, int flush_rece
 		 */
 		headers = curl_slist_append(headers, "Transfer-Encoding: chunked");
 		rpc->initial_buffer = 1;
+		curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, -1L);
 		curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, rpc_out);
 		curl_easy_setopt(slot->curl, CURLOPT_INFILE, rpc);
 		curl_easy_setopt(slot->curl, CURLOPT_SEEKFUNCTION, rpc_seek);

But it wasn't necessary with older curl. Should we have been setting that all along? Or is curl internally failing to treat a zero size along with a NULL POSTFIELD as "we should use READFUNCTION instead"?

Curiously it doesn't seem to fix the problem when hitting GitHub. It looks like maybe we are not sending the chunked encoding header then? So maybe one is not supposed to pass "-1" for that field.

@bagder
Member Author

bagder commented Mar 31, 2024

Should we have been setting that all along?

No, it should not be necessary. -1 is the presumed value/default if not set. This is a regression.

@bagder
Member Author

bagder commented Mar 31, 2024

I need to confer with @icing and decide on a way forward. I think he might be having an Easter time-off like we all should have! 😁

@bagder
Member Author

bagder commented Mar 31, 2024

Here is an updated debugit.c that I believe reproduces the issue in the same style git does it: first a short POST, then one using the read callback. The callback POST does not reset the postfieldsize (unless this example source is modified) and then sends an aborted chunked stream.

@icing
Contributor

icing commented Apr 1, 2024

Should we have been setting that all along?

No, it should not be necessary. -1 is the presumed value/default if not set. This is a regression.

I agree. Just trying to understand how we manage that and where the regression came in:

  1. git makes a GET with the handle
  2. git makes a POST with CURLOPT_POSTFIELDS and CURLOPT_POSTFIELDSIZE_LARGE == 4
  3. git makes a POST with CURLOPT_READFUNCTION, `data->set.postfieldsize` is still 4 -> FAIL

There is no call to curl_easy_reset().
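
A minimal sketch of that sequence on a single reused easy handle (in the spirit of the debugit.c reproducer mentioned above, but not the actual file; the URL and payload are placeholders):

#include <string.h>
#include <curl/curl.h>

static const char payload[] = "0014command=ls-refs\n";

/* hand the payload to curl piecewise; returning 0 ends the chunked upload */
static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  size_t *pos = userdata;
  size_t max = size * nmemb;
  size_t avail = sizeof(payload) - 1 - *pos;

  if (!avail)
    return 0;
  if (avail > max)
    avail = max;
  memcpy(ptr, payload + *pos, avail);
  *pos += avail;
  return avail;
}

int main(void)
{
  CURL *h = curl_easy_init();
  struct curl_slist *hdrs = curl_slist_append(NULL, "Transfer-Encoding: chunked");
  size_t pos = 0;

  curl_easy_setopt(h, CURLOPT_URL, "https://example.com/upload");

  /* step 2: a small POST hands curl a 4-byte buffer and a size of 4
     (step 1, a plain GET on the same handle, is omitted here) */
  curl_easy_setopt(h, CURLOPT_POSTFIELDS, "0000");
  curl_easy_setopt(h, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)4);
  curl_easy_perform(h);

  /* step 3: a callback-driven POST on the same handle; POSTFIELDS is
     cleared but the stored postfieldsize of 4 is (deliberately) never
     reset, which is the situation that regressed in 8.7.0 */
  curl_easy_setopt(h, CURLOPT_POST, 1L);
  curl_easy_setopt(h, CURLOPT_POSTFIELDS, NULL);
  curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);
  curl_easy_setopt(h, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(h, CURLOPT_READDATA, &pos);
  curl_easy_perform(h);

  curl_slist_free_all(hdrs);
  curl_easy_cleanup(h);
  return 0;
}

The second transfer is where a stale postfieldsize of 4 can make curl stop reading from the callback after only 4 bytes.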

@icing
Contributor

icing commented Apr 1, 2024

Please see #13257 for the fix on this. I verified with a git master checkout. Please report back if this works for you as well.

@icing
Contributor

icing commented Apr 1, 2024

As for the git code I peeked into for this regression, I'd recommend not setting the "chunked" header, but setting the post field size to -1. Curl will then do the appropriate thing for all HTTP versions.
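
In option terms that recommendation reads roughly as follows (a sketch reusing the placeholder names h, read_cb and cb_state from the examples above, not a patch to Git):

/* let curl pick the framing: with an unknown size (-1) plus a read
   callback it adds "Transfer-Encoding: chunked" itself on HTTP/1.1 and
   uses plain DATA frames on HTTP/2 and later */
curl_easy_setopt(h, CURLOPT_POST, 1L);
curl_easy_setopt(h, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)-1);
curl_easy_setopt(h, CURLOPT_READFUNCTION, read_cb);
curl_easy_setopt(h, CURLOPT_READDATA, cb_state);
/* and no manually added "Transfer-Encoding: chunked" header */

On reasonably current libcurl the automatic chunked header for HTTP/1.1 has been in place since 7.66.0 (see the commit message quoted at the end of this thread).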

@peff
Contributor

peff commented Apr 1, 2024

@icing Thanks, your patch seems to fix my stand-alone reproduction, and it makes my ls-remote to kernel.org work.

Curiously, though, trying the same thing against GitHub continues to fail. E.g.:

git -c http.postbuffer=65536 ls-remote https://github.com/git/git

The big difference there is that it is using HTTP/2. Looking at verbose output, it seems that in 8.6.0, we'd stream the input directly. But in 8.7.1 + your fix, we send chunked transfer-encoding. Here's a diff of two traces (preimage is 8.6.0, postimage is your PR):

 => Send header, 0000000260 bytes (0x00000104)
 => Send header: POST /git/git/git-upload-pack HTTP/2
 => Send header: Host: github.com
 => Send header: User-Agent: git/2.44.0.413.gd6fd04375f.dirty
 => Send header: Accept-Encoding: deflate, gzip
 => Send header: Content-Type: application/x-git-upload-pack-request
 => Send header: Accept: application/x-git-upload-pack-result
 => Send header: Git-Protocol: version=2
 => Send header:
-=> Send data, 0000000020 bytes (0x00000014)
-=> Send data: 0014command=ls-refs.
-=> Send data, 0000000042 bytes (0x0000002a)
-=> Send data: 002aagent=git/2.44.0.413.gd6fd04375f.dirty
-=> Send data, 0000000022 bytes (0x00000016)
-=> Send data: 0016object-format=sha1
-=> Send data, 0000000004 bytes (0x00000004)
-=> Send data: 0001
+=> Send data, 0000000026 bytes (0x0000001a)
+=> Send data: 14..0014command=ls-refs...
+=> Send data, 0000000048 bytes (0x00000030)
+=> Send data: 2a..002aagent=git/2.44.0.413.gd6fd04375f.dirty..
+=> Send data, 0000000028 bytes (0x0000001c)
+=> Send data: 16..0016object-format=sha1..
[etc...]

I'm not sure if this is caused by the same commit or not. I'll see if I can bisect, applying your new fix on top of the intermediate states.

@icing
Contributor

icing commented Apr 1, 2024

@peff, ah, right. I amended the PR with the additional fix that suppresses chunked encoding completely for HTTP/2 and higher.

@peff
Contributor

peff commented Apr 1, 2024

I'm not sure if this is caused by the same commit or not. I'll see if I can bisect, applying your new fix on top of the intermediate states.

Yeah, it is the same commit (though perhaps a different problem introduced by the same commit). If I build the parent of 9369c30, then the POST that git -c http.postbuffer=65536 ls-remote https://github.com/git/git sends starts with 0014command=ls-refs\n, as expected. Moving to 9369c30, we get the truncated output: we send only the 4 bytes 0014. Adapting the fix from your PR like so:

diff --git a/lib/http.c b/lib/http.c
index 3f17f7cd9..56c60f9f9 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -2304,7 +2304,8 @@ CURLcode Curl_http_req_complete(struct Curl_easy *data,
     }
     else { /* we read the bytes from the callback */
       Curl_pgrsSetUploadSize(data, http->postsize);
-      result = Client_reader_set_fread(data, http->postsize);
+      bool chunked = !!Curl_checkheaders(data, STRCONST("Transfer-Encoding"));
+      result = Client_reader_set_fread(data, chunked? -1 : http->postsize);
     }
     http->sending = HTTPSEND_BODY;
     break;

curl will now read all of the input from the READFUNCTION, but it sends 14\n\n0014command=ls-refs\n\n\n (just as with the fix in your PR).

Switching Git to set POSTFIELDSIZE to -1L (rather than setting the transfer-encoding header) does fix it. I'll look into making that change for Git, but if the goal of your PR is to keep the manual header mechanism working, I think it needs something else for HTTP/2.

@peff
Contributor

peff commented Apr 1, 2024

@icing Ah, our comments just crossed. I just tried the fix you pushed up to the PR, and it works! I think that fixes all of the cases I've seen.

@christian-heusel

I have applied the mentioned PR on top of 8.7.1 and sadly it does not fix the issue that I have mentioned in #13229 (comment)

See the following gist for the debug log:

$ export GIT_TRACE_CURL=1
$ makepkg -od

https://gist.githubusercontent.com/christian-heusel/f0284be642a3c0952a66d82f6bd93d3a/raw/f4ae55819697650bc99934ca5af47c5b38b2b1e1/log.txt

Sadly it's huge, so if you need anything more specific let me know and I'll happily provide more (useful) input.

@icing
Contributor

icing commented Apr 1, 2024

Did you try the PR in its latest revision? The last fix was for HTTP/2, which I see in the log.

@peff
Contributor

peff commented Apr 1, 2024

Yes, looking at the trace in #13229 (comment) it is sending the chunked encoding over HTTP/2, which would be consistent with using only the earlier version of the PR. The failing xbmc clone works fine for me with the most recent version of the PR.

@christian-heusel

Did you try the PR in its latest revision?

I have applied the following patch, which I think includes the changes added later: https://gist.github.com/christian-heusel/f4c1a95c29a36ad614f7fab368a8fdf1

@icing
Contributor

icing commented Apr 1, 2024

The logs look, as @peff said, like the POST request body is sent using chunked encoding. This is exactly what the last addition to the patch fixed. Could you double check that the failure happens with the full patch active? Thanks!

@christian-heusel

christian-heusel commented Apr 1, 2024

Sorry for the additional noise, the later-added changes also fixed it for me 👍🏻

I had the correct patchfile locally, but not on the build machine I used 😢 🤦🏻 I even double-checked the local patchfile before posting #13229 (comment) (which of course did not help) because I was fearing exactly this would happen 😆

Thanks for posting a fix so quickly! 🥳 ❤️

@icing
Contributor

icing commented Apr 1, 2024

Thanks for clarifying. Such things happen to everyone, speaking from my own experience. 😌

@bagder bagder closed this as completed in 721941a Apr 2, 2024
@bagder
Member Author

bagder commented Apr 2, 2024

Thank you everyone!

gitster pushed a commit to git/git that referenced this issue Apr 2, 2024
In get_active_slot(), we return a CURL handle that may have been used
before (reusing them is good because it lets curl reuse the same
connection across many requests). We set a few curl options back to
defaults that may have been modified by previous requests.

We reset POSTFIELDS to NULL, but do not reset POSTFIELDSIZE (which
defaults to "-1"). This usually doesn't matter because most POSTs will
set both fields together anyway. But there is one exception: when
handling a large request in remote-curl's post_rpc(), we don't set
_either_, and instead set a READFUNCTION to stream data into libcurl.

This can interact weirdly with a stale POSTFIELDSIZE setting, because
curl will assume it should read only some set number of bytes from our
READFUNCTION. However, it has worked in practice because we also
manually set a "Transfer-Encoding: chunked" header, which libcurl uses
as a clue to set the POSTFIELDSIZE to -1 itself.

So everything works, but we're better off resetting the size manually
for a few reasons:

  - there was a regression in curl 8.7.0 where the chunked header
    detection didn't kick in, causing any large HTTP requests made by
    Git to fail. This has since been fixed (but not yet released). In
    the issue, curl folks recommended setting it explicitly to -1:

      curl/curl#13229 (comment)

    and it indeed works around the regression. So even though it won't
    be strictly necessary after the fix there, this will help folks who
    end up using the affected libcurl versions.

  - it's consistent with what a new curl handle would look like. Since
    get_active_slot() may or may not return a used handle, this reduces
    the possibility of heisenbugs that only appear with certain request
    patterns.

Note that the recommendation in the curl issue is to actually drop the
manual Transfer-Encoding header. Modern libcurl will add the header
itself when streaming from a READFUNCTION. However, that code wasn't
added until 802aa5ae2 (HTTP: use chunked Transfer-Encoding for HTTP_POST
if size unknown, 2019-07-22), which is in curl 7.66.0. We claim to
support back to 7.19.5, so those older versions still need the manual
header.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Nekosha pushed a commit to Nekosha/git-po that referenced this issue Apr 16, 2024