Re: Idea: master/client concept
Date: Thu, 19 Apr 2018 11:09:24 +0200
Hi Daniel, curl-users,
Really happy to see a move in that direction. Thanks a lot!!
What about keeping the way curl is used today mostly intact, and leveraging "-K" (read options from a file, stdin, or a named pipe) in conjunction with "--next"?
Instead of reading the whole configuration from the file, stdin, or named pipe before starting any transfer, curl would parse options up to a "--next", pause reading the config, fire off the transfer, then continue parsing where it left off.
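To illustrate, here is what such a config stream could look like (hypothetical URLs and filenames; under today's curl, "-K" reads all of this before any transfer starts, while under this proposal each "--next" would trigger the preceding transfer before parsing continues):

```
# Fed to: curl -K -
url = "https://example.com/one"
output = "one.html"
--next
url = "https://example.com/two"
output = "two.html"
```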
When started with "-K", curl would not quit until it receives an EOF.
That way, an application could "remote-control" a copy of curl through a pipe and send it a set of URLs computed dynamically.
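A sketch of that remote-control pattern through a named pipe, assuming the proposed behavior (today's curl would instead block until the whole FIFO hits EOF before performing any transfer; URLs are placeholders):

```shell
mkfifo /tmp/curl-ctl
curl -K /tmp/curl-ctl &                       # keeps reading until EOF (proposed)
# Controlling application writes transfers as it computes them:
printf 'url = "https://example.com/a"\n--next\n' > /tmp/curl-ctl
# ... later, another dynamically computed transfer:
printf 'url = "https://example.com/b"\n' > /tmp/curl-ctl
```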
As usual, curl would keep its connections open for reduced latency on subsequent transfers, using the standard connection pool that holds N connections alive after use in case they can be reused by a later request.
(Could each connection even serve as the basis for parallel transfers??)
That would also speed up transfers when curl sits at the end of a shell pipeline that computes and passes in a large list of URLs.
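For example, a pipeline could turn a computed URL list into a "-K" config stream on the fly, inserting "--next" between transfers (the URLs and the awk rewrite are just an illustration):

```shell
# Turn a list of URLs into a config stream: one "url = ..." line per
# transfer, separated by "--next" lines.
printf '%s\n' \
  'https://example.com/a' \
  'https://example.com/b' |
awk 'NR > 1 { print "--next" }                # separator between transfers
     { print "url = \"" $0 "\"" }'
# The resulting stream would then be piped into:  curl -K -
```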
Some alternative to "--next" could enable this behavior explicitly from the start, to make it less likely to rock any boats. Something like "--flush" or "--run".
Curl could even be extended to listen on a socket or TCP port and read its config from there instead of from a pipe.
(Most of this proposal comes from an exchange on the curl-users mailing list from Dec 15, 2015: 'curl waits for stdin to "EOF" before firing requests when "-K -" is used to read config from stdin'.)
I don't know the proper way to get this proposal into the wiki, so I'm using the mailing list.
Sorry about that.