
Re: Getting a list of easy handles in a multi handle - possible?

From: Henrik Holst via curl-library <>
Date: Tue, 29 Aug 2023 00:08:59 +0200

wouldn't an API that updates the list via callbacks when handles are added
and removed be equivalent to the application just keeping the list
internally, like one would have to do today? I don't really see the
difference between those two cases.

At least the way I see it, the application does not want to keep an
internal list, and instead at some point in time needs the list, so it
asks for a snapshot of the handles added to a multi and can perform
something on them right there and then. But perhaps I'm missing some other
use case where the application needs to keep this list around without
keeping it around, so to speak.
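[A minimal sketch of the application-side bookkeeping described above, i.e. what an application has to do today: maintain its own list at the same call sites where it calls curl_multi_add_handle()/curl_multi_remove_handle(). CURL is stubbed as void so the sketch compiles without libcurl; in real code it is the libcurl easy-handle type.]

```c
#include <assert.h>
#include <stdlib.h>

typedef void CURL; /* stand-in for the real libcurl easy-handle type */

struct handle_list {
    CURL **items;
    size_t count, cap;
};

/* Call this right after a successful curl_multi_add_handle(). */
static void list_add(struct handle_list *l, CURL *h)
{
    if (l->count == l->cap) {
        l->cap = l->cap ? l->cap * 2 : 8;
        l->items = realloc(l->items, l->cap * sizeof(*l->items));
    }
    l->items[l->count++] = h;
}

/* Call this right after a successful curl_multi_remove_handle().
   Order is not preserved: the last entry fills the freed slot. */
static void list_remove(struct handle_list *l, CURL *h)
{
    for (size_t i = 0; i < l->count; i++) {
        if (l->items[i] == h) {
            l->items[i] = l->items[--l->count];
            return;
        }
    }
}
```

At any point, `items[0..count-1]` is exactly the snapshot of easy handles currently in the multi, available in normal scope with no callback involved.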


On Tue, 29 Aug 2023 at 00:03, bch via curl-library <> wrote:

> On Mon, Aug 28, 2023 at 14:59 bch <> wrote:
>> On Mon, Aug 28, 2023 at 14:32, Daniel Stenberg via curl-library <> wrote:
>>> On Mon, 28 Aug 2023, Paul Fotheringham wrote:
>>> > That thread seemed to peter out without a reply to Brad's question.
>>> > I too would benefit from an example of why calling libcurl functions
>>> > from within the callback is tricky to handle. I'm not saying it's not
>>> > tricky to handle, I just ask from a position of ignorance.
>>> First out, we already deny most libcurl functions from getting called
>>> from their callbacks, precisely because of these challenges.
>>>
>>> When libcurl calls a callback it is "in the middle of something", while
>>> when it returns back to the caller it has finished doing it and has
>>> stored the state correctly somewhere, in order to be able to continue
>>> from there at the next invocation.
>>>
>>> When a callback calls libcurl *back* in a recursive manner, it is hard
>>> to make sure that all states, pointers and variables are handled
>>> correctly, since the recursively called function changes internals and
>>> then returns back to the callback, which returns back into libcurl
>>> again ... into another context which may have local state or variables
>>> that are no longer correct, because things have been changed.
>>>
>>> It is of course quite *possible* to make this work (it is just code
>>> after all) but it requires deliberate attention and quite a lot of
>>> testing to make sure lots of edge cases are covered, and we do neither.
>>> As I said: we prevent the recursive call instead, to protect the
>>> application from problems.
>>>
>>> By avoiding recursive calls into the library our lives are much easier.
>> That all makes sense.
>>
>> What I was thinking of as the strength of my original proposal is
>> up-to-the-moment canonical truth (because calls are dispatched from
>> within curl itself, with its own known state). It might be starting to
>> get clunky (food for discussion), but perhaps the contract:
>> 1) whitelists known-good functions, or
>> 2) makes the CURLM* and CURL* handles read-only, with subsequent ops
>> relegated to (e.g.) querying the caller's own userdata upon return of
>> curlm_easy_iter() and operating in "typical scope" rather than at some
>> arbitrary callback depth.
> Though then the resultant list is subject to some staleness etc., as in
> the previous discussion… perhaps not as interesting as I initially hoped.
>> -bch
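[curlm_easy_iter() above is bch's HYPOTHETICAL name for a proposed API, not a real libcurl function. Assuming that proposed contract, option 2 could look roughly like this: the iterator hands each easy handle to a read-only visitor, the visitor only copies what it needs into its own userdata, and any mutation happens after the iterator returns. Types are stubbed so the sketch compiles without libcurl.]

```c
#include <assert.h>
#include <stddef.h>

typedef void CURL; /* stand-in for the real libcurl easy-handle type */

/* Stand-in for an opaque CURLM that knows its easy handles. */
typedef struct {
    CURL *handles[16];
    size_t count;
} CURLM_SKETCH;

/* HYPOTHETICAL contract: visit each easy handle read-only; the visitor
   must not call back into libcurl, only record into its own userdata. */
typedef void (*iter_cb)(CURL *easy, void *userdata);

static void curlm_easy_iter_sketch(CURLM_SKETCH *m, iter_cb cb, void *userdata)
{
    for (size_t i = 0; i < m->count; i++)
        cb(m->handles[i], userdata);
}

/* Example visitor: snapshot the handles for use in "typical scope". */
struct snapshot {
    CURL *items[16];
    size_t count;
};

static void collect(CURL *easy, void *userdata)
{
    struct snapshot *s = userdata;
    s->items[s->count++] = easy; /* read/copy only, no libcurl calls */
}
```

After `curlm_easy_iter_sketch()` returns, the caller operates on its own `struct snapshot` at ordinary call depth, which is exactly the point of the proposed contract.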
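[The no-reentrancy rule Daniel describes is commonly handled on the application side by deferring: a callback never calls libcurl back, it only records what it wants done, and the application drains that queue after curl_multi_perform() returns to normal scope. A minimal sketch; the names and the fixed-size queue are mine, not libcurl's, and the handle type is stubbed so it compiles stand-alone.]

```c
#include <assert.h>
#include <stddef.h>

typedef void CURL; /* stand-in for the real libcurl easy-handle type */

enum action { ACT_ADD_HANDLE, ACT_REMOVE_HANDLE };

struct pending {
    enum action what;
    CURL *handle;
};

static struct pending queue[64];
static size_t queued;

/* Safe to call from inside a libcurl callback: only records the intent. */
static void defer(enum action what, CURL *h)
{
    if (queued < sizeof(queue) / sizeof(queue[0]))
        queue[queued++] = (struct pending){ what, h };
}

/* Called after curl_multi_perform() has returned: now it is safe to
   actually call curl_multi_add_handle()/curl_multi_remove_handle().
   Returns the number of deferred actions that were applied. */
static size_t drain(void (*apply)(enum action, CURL *))
{
    size_t n = queued;
    for (size_t i = 0; i < queued; i++)
        apply(queue[i].what, queue[i].handle);
    queued = 0;
    return n;
}

/* Example apply function for the sketch: just counts invocations. */
static size_t applied;
static void count_apply(enum action what, CURL *h)
{
    (void)what;
    (void)h;
    applied++;
}
```

In a real application, `apply` would dispatch to the actual libcurl calls, which then run at ordinary call depth rather than inside a callback.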

Received on 2023-08-29