curl-library
Re: extract max keepalive requests configuration using a discovery loop
Date: Tue, 27 Sep 2005 14:18:47 +0200 (CEST)
On Tue, 27 Sep 2005, Roberto Nibali wrote:
> To cut a long story short, one use case is to find out a web server's max
> keepalive requests setting by repeatedly fetching a URL with little content
> using HTTP/1.1 requests. The simplest approach seemed to be watching the
> difference between CURLINFO_NUM_CONNECTS and CURLINFO_REDIRECT_COUNT. Once
> this difference exceeded 0, it meant we had to initiate a new connection.
> Of course the initial connection should be skipped in this loop.
I don't understand why you need to involve the redirect count. Can't you just
read CURLINFO_NUM_CONNECTS and, as long as it returns zero, know that the
previous connection was re-used?
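
For the record, this is roughly how I imagine such a loop, using nothing but
CURLINFO_NUM_CONNECTS (a rough sketch, not tested; http://example.com/ and
the MAX_LOOP bound are placeholders to substitute with your own):

  #include <stdio.h>
  #include <curl/curl.h>

  #define MAX_LOOP 50 /* upper bound on requests to try, pick your own */

  /* throw away the response body so it doesn't end up on stdout */
  static size_t discard(char *ptr, size_t size, size_t nmemb, void *data)
  {
    (void)ptr; (void)data;
    return size * nmemb;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long num_connects;
    int i;

    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

    for(i = 0; i < MAX_LOOP; i++) {
      if(curl_easy_perform(curl) != CURLE_OK)
        break;
      curl_easy_getinfo(curl, CURLINFO_NUM_CONNECTS, &num_connects);
      /* the first transfer always creates a connection; after that, a
         non-zero count means the server closed the connection on us */
      if(i && num_connects) {
        printf("server allowed %d requests on one connection\n", i);
        break;
      }
    }
    curl_easy_cleanup(curl);
    return 0;
  }

Connection re-use happens automatically as long as you keep performing on the
same easy handle.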
> The problem is that my approach seems to have an off-by-2 error which I
> can't explain, and I was hoping to get some pointers from you folks.
...
> What did I miss?
I don't see how it is off by two. I set MAX_LOOP to 5 and ran it on a URL of
mine, and it claimed 3 connections while I could clearly see how all requests
re-used the same one. Thus it was off by 4, and increasing MAX_LOOP makes it
even more off.
I don't see how your program can do what you say you want it to do.
> And would it be intelligent to have more information on the socket state
> reported back through curl_easy_getinfo(), such as an n-tuple of socket
> addr/peer addr/port/peer port/state/... ?
Yes, the ability to extract further information would indeed be useful, and I
wouldn't mind adding such features, should someone write a patch for it.
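
To illustrate: given the socket descriptor (which libcurl doesn't currently
hand out through curl_easy_getinfo(), so treat that part as hypothetical),
the rest of such a tuple is available from plain getsockname()/getpeername()
calls, IPv4-only here for brevity:

  #include <stdio.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  /* print the addr/port pairs of a connected socket; where 'sockfd'
     comes from is the hypothetical part, libcurl doesn't expose it */
  static void print_conn_tuple(int sockfd)
  {
    struct sockaddr_in local, peer;
    socklen_t len = sizeof(local);
    char lbuf[INET_ADDRSTRLEN], pbuf[INET_ADDRSTRLEN];

    if(getsockname(sockfd, (struct sockaddr *)&local, &len))
      return;
    len = sizeof(peer);
    if(getpeername(sockfd, (struct sockaddr *)&peer, &len))
      return;

    inet_ntop(AF_INET, &local.sin_addr, lbuf, sizeof(lbuf));
    inet_ntop(AF_INET, &peer.sin_addr, pbuf, sizeof(pbuf));
    printf("local %s:%d peer %s:%d\n",
           lbuf, ntohs(local.sin_port), pbuf, ntohs(peer.sin_port));
  }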
> Another problem I've encountered is that if you start a couple of hundreds
> of parallel test and you get close to resource starvation libcurl kind of
> seems to have issues with signal handling.
What kind of resources would that starve out?
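
One guess, in case it is the SIGALRM that libcurl may raise for name resolve
timeouts: heavily threaded programs should set CURLOPT_NOSIGNAL, like

  /* tell libcurl not to use any signals (e.g. SIGALRM for DNS
     timeouts); recommended for multi-threaded programs */
  curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);

but that's speculation as long as I can't repeat the problem.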
[...]
> I just wanted to note it in case this is an old issue.
I don't recognize it. And as you say, problems that require a large number of
threads and transfers are really hard to repeat and even harder to fix!
--
Commercial curl and libcurl Technical Support: http://haxx.se/curl.html

Received on 2005-09-27