Re: extract max keepalive requests configuration using a discovery loop
Date: Tue, 27 Sep 2005 18:15:10 +0200 (CEST)
On Tue, 27 Sep 2005, Roberto Nibali wrote:
> So my question is as follows: How do I get the amount of successful connects
> before libcurl was not able anymore to reuse (being forced by the webserver
> through the max'd out MaxKeepAliveRequests number) the handle and had to
> re-initiate a connection?
CURLINFO_NUM_CONNECTS returns the number of new connects that were made. 0
means it re-used an existing connection.
CURLINFO_REDIRECT_COUNT returns the number of redirects.
I believe the description you quoted meant that if a redirect is followed, you
can end up with a larger number of new connects (depending on how many
redirects there were and whether they managed to re-use connections or not).
> Or in other words, if on a webserver we have following settings on an
> apache 2.0.x server:
> MaxKeepAliveRequests 13
> KeepAlive On
> KeepAliveTimeout 15
> How do I get the MaxKeepAliveRequests by probing using my approach?
I'm not sure what those Apache config entries mean. Is MaxKeepAliveRequests
the maximum number of consecutive requests allowed on the same connection? If
so, you could use CURLINFO_NUM_CONNECTS and figure out when it no longer
returns 0 as then a new connect was made. Should then be after the 13th
request. (Assuming my initial assumption is correct.)
>> I don't see how it is off by two. I set MAX_LOOP to 5, and run it on a
>> URL of mine and then it says it was 3 connections while I could clearly
>> see how all requests re-used the same one. Thus it was off by 4.
> ?? off by 2, I'd say. since you did 5 connections and the program said
> it did make 3.
It used one connection doing 5 requests. It said '3', yes, but that's because
you display loop - 2, not because it signifies anything.
Unless I missed something of course.
>> Increase MAX_LOOP makes it even more off.
> Well, try setting MAX_LOOP to MaxKeepAliveRequests + 1 and re-run your
Why? Can't you successfully do this the way I suggest?
>> Yes, adding ability to extract further information would indeed be
>> usable and I wouldn't mind adding such features, should someone write a
>> patch for it.
> Hmm, maybe in a spare minute I could dive into the libcurl sources but so
> far I was extremely happy that the implementation and documentation didn't
> require me to do so.
I understand that, but someone has to add the features... :-)
--
Commercial curl and libcurl Technical Support: http://haxx.se/curl.html
Received on 2005-09-27