Re: Feature request: data stream upload after server response
From: fungs via curl-users <curl-users_at_lists.haxx.se>
Date: Mon, 10 Jun 2024 15:15:04 +0000
10.06.24 16:18 Daniel Stenberg <daniel at haxx.se>:
> On Mon, 10 Jun 2024, fungs via curl-users wrote:
>
>> When first listening for a status 200 response before sending the actual
>> payload, things work faster and more predictably in those cases.
> First: waiting for an HTTP response code before sending the data violates
> the protocol simply by assuming that it can do so. An HTTP server does not
> need to send any response at all until the entire request has been sent.
That's a good argument. Still, when you know the remote side's behavior, this
should be more of a theoretical issue, especially with the workaround as an
opt-in toggle like all the other workarounds in curl.
>> At least for chunked data, this approach looks perfectly suitable to me from
>> a practical point of view. Even if used as a replacement in the standard
>> upload flow, it would only add a tiny bit of latency.
> That's just your guess though. In N percent of cases there is no response
> before the data has started to get transmitted, so you would need a timeout
> for it, and such a timeout would hurt those N percent of users.
>
> This, because some percent of (bad) servers are slow when the clients send
> data "too early".
These are all thoughts and guesses :) Implemented well, there would be a time
penalty when used with a server that does not respond until it has received
the entire request, but I didn't mean to say this should be the standard mode
of operation. Knowing how the remote side works, the workaround could be
switched on deliberately. In my case, it's not about servers being slow: the
edge router simply returns an internal server error if the initial request is
too large.
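For context, here is a minimal sketch of the default libcurl upload flow under
discussion; the URL and payload are placeholders, not anything from this
thread. With CURLOPT_UPLOAD set and no size given, libcurl streams the body
with Transfer-Encoding: chunked right after the request headers, without
waiting for any server response:

  #include <stdio.h>
  #include <string.h>
  #include <curl/curl.h>

  /* feed the payload to libcurl piece by piece; returning 0 ends the body */
  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
  {
    const char **pos = (const char **)userdata;
    size_t room = size * nitems;
    size_t len = strlen(*pos);
    if(len > room)
      len = room;
    memcpy(buf, *pos, len);
    *pos += len;
    return len;
  }

  int main(void)
  {
    const char *pos = "placeholder payload\n";
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);           /* PUT */
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &pos);
      /* no CURLOPT_INFILESIZE: with an unknown size, libcurl sends the
         body with Transfer-Encoding: chunked over HTTP/1.1 */
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }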
> If we want a response before sending data, the general way is to send the
> "Expect: 100-continue" header. Although admittedly this is not as widely
> supported by server endpoints as we would like.
>
> Does sending the Expect: header fix your use case? If not, what happens that
> makes it not work?
Wow, that was a helpful hint, thanks! I just didn't know about it. I quickly
read up on it, and it's IMO exactly the logic that is needed and that I was
proposing, just better, because it's made precisely for this purpose with a
dedicated header and response. There is still a timeout parameter, but that
can be tuned. I don't see any reason it shouldn't work, other than the remote
endpoint not supporting it. However, that can be changed!
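To make this concrete, here is a minimal sketch, assuming the endpoint honors
100-continue; the URL, payload and 3-second wait are illustrative choices, not
from this thread. It forces the Expect: 100-continue header on the upload and
tunes how long libcurl waits for the interim response through
CURLOPT_EXPECT_100_TIMEOUT_MS (default one second, available since libcurl
7.36.0):

  #include <stdio.h>
  #include <string.h>
  #include <curl/curl.h>

  /* feed the payload to libcurl piece by piece; returning 0 ends the body */
  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
  {
    const char **pos = (const char **)userdata;
    size_t room = size * nitems;
    size_t len = strlen(*pos);
    if(len > room)
      len = room;
    memcpy(buf, *pos, len);
    *pos += len;
    return len;
  }

  int main(void)
  {
    const char *pos = "placeholder payload\n";
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      struct curl_slist *hdrs =
        curl_slist_append(NULL, "Expect: 100-continue");
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &pos);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      /* wait up to 3 s for the "100 Continue" interim response before
         sending the body anyway; the default is 1 s */
      curl_easy_setopt(curl, CURLOPT_EXPECT_100_TIMEOUT_MS, 3000L);
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(res));
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }

The command-line equivalent of that last knob is --expect100-timeout
<seconds>; note that curl already sends Expect: 100-continue on its own for
larger HTTP/1.1 uploads.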
Best
Johannes
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-users
Etiquette: https://curl.se/mail/etiquette.html

Received on 2024-06-10