curl-library
Re: maximizing performance when issuing a large number of GET/PUT requests to a single server
Date: Thu, 11 Aug 2011 12:01:32 +0200 (CEST)
On Wed, 10 Aug 2011, Alex Loukissas wrote:
> I am looking to refactor this code now, so that I get a speedup by
> parallelism (the order that these requests are performed doesn't need to be
> FIFO or anything, as long as they all succeed), and connection reuse (i.e.
> filling up the pipe).
In many situations you won't gain any performance by doing parallel uploads
to a single server, and you'll get a simpler implementation by doing serial
uploads, so I'd recommend that, at least for a first shot.
Just make sure you re-use the same CURL handle _without_ doing cleanup/init
between each individual transfer.
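
Something like this rough sketch is all it takes (the URL list is just a
placeholder, and the PUT-specific options are only hinted at in a comment):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* placeholder URLs - substitute the real transfer list */
    const char *urls[] = {
      "http://example.com/obj/1",
      "http://example.com/obj/2",
      "http://example.com/obj/3"
    };
    CURL *curl;
    CURLcode res;
    int i;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(!curl)
      return 1;

    for(i = 0; i < 3; i++) {
      curl_easy_setopt(curl, CURLOPT_URL, urls[i]);
      /* for PUTs, also set CURLOPT_UPLOAD, CURLOPT_READFUNCTION etc */
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "transfer %d: %s\n", i, curl_easy_strerror(res));
      /* note: no curl_easy_cleanup()/curl_easy_init() inside the loop, so
         the handle keeps its connection cache and can re-use the
         connection to the server */
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
  }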
> One of the things I've looked at is going into the curl_multi_* interface,
> have a set number of handles and reuse them until all requests have been
> served. My question is the following: is there a benefit of doing this
> (which from what I understand is single-threaded and serialized --see:
> http://curl.haxx.se/mail/lib-2011-06/0065.html) over having a thread pool of
> curl_easy_handles that each has a connection to the server?
For a low degree of parallelism, you probably won't see any speed difference
between the multi interface and doing threaded easy interface transfers. For
a large number of simultaneous transfers (where large is at least above 100),
you should see better performance by going with the multi_socket API. Mostly
it is a matter of how you want to design your software.
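
For illustration, a rough sketch of the plain multi interface (not
multi_socket) driving a handful of easy handles concurrently; NUM and the
URLs are made up, and curl_multi_wait() needs libcurl 7.28.0 or later (on
older versions you'd use curl_multi_fdset() and select() instead):

  #include <stdio.h>
  #include <curl/curl.h>

  #define NUM 4 /* hypothetical degree of parallelism */

  int main(void)
  {
    CURLM *multi;
    CURL *easy[NUM];
    char url[64];
    int i, running = 0;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    multi = curl_multi_init();

    /* create one easy handle per transfer and add them all */
    for(i = 0; i < NUM; i++) {
      easy[i] = curl_easy_init();
      snprintf(url, sizeof(url), "http://example.com/obj/%d", i);
      curl_easy_setopt(easy[i], CURLOPT_URL, url);
      curl_multi_add_handle(multi, easy[i]);
    }

    /* drive all transfers in a single thread until they complete */
    do {
      curl_multi_perform(multi, &running);
      if(running)
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
    } while(running);

    for(i = 0; i < NUM; i++) {
      curl_multi_remove_handle(multi, easy[i]);
      curl_easy_cleanup(easy[i]);
    }
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }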
--
 / daniel.haxx.se