curl-library
extract max keepalive requests configuration using a discovery loop
Date: Tue, 27 Sep 2005 12:01:27 +0200
Hello,
I've been using the excellent curl library for a customized performance
test and was able to write such a test harness in two days for a rather
complex e-commerce website. Thanks to all authors and contributors for
such a wonderfully designed and well-implemented library; it's a joy to
work with libcurl. Damn, it's even valgrind-safe ;).
Since my test tool worked better than any commercial Windows-based web
crawler tool they deployed, I was given the task of further extending it
to support selective use cases.
To cut a long story short, one use case is being able to find out the
max keepalive requests setting on a web server by fetching a URL with
little content using HTTP/1.1 requests. The simplest approach seemed to
be watching the difference between CURLINFO_NUM_CONNECTS and
CURLINFO_REDIRECT_COUNT: once this difference exceeds 0, it means we had
to initiate a new connection. Of course the initial connection should be
skipped in this loop.
The problem is that my approach seems to have an off-by-2 error which I
can't explain, and I was hoping to get some pointers from you folks.
Here is the relevant code snippet, which should indicate the problem
zone (I've omitted the error handling for legibility reasons, and the
curl handle is set up in the main part of the program):
void perftest_rundisc(CURL *curl) {
    CURLcode res;
    unsigned int loop;
    unsigned int new_conn;
    unsigned int skip;
    long con;
    long red;

    res = curl_easy_setopt(curl, CURLOPT_URL, "http://www.somesite.com/");
    new_conn = 0;
    loop = 0;
    skip = 1;
    while (new_conn == 0 && loop < MAX_LOOPS) {
        res = curl_easy_perform(curl);
        res = curl_easy_getinfo(curl, CURLINFO_NUM_CONNECTS, &con);
        res = curl_easy_getinfo(curl, CURLINFO_REDIRECT_COUNT, &red);
        if ((con - red) > 0) {
            if (skip == 1) {
                /* the first connect of the socket */
                skip = 0;
            } else {
                /* a new socket has been opened */
                new_conn = 1;
            }
        }
        if (new_conn == 0) {
            loop_sleep(); /* sleeps for 100 ms using nanosleep() */
        }
        loop++;
        if (debug > 0) {
            fprintf(stderr, "Requests done: %u\n", loop);
        }
    }
    /*
     * For reasons unknown to me we need to subtract 2 from the loop
     * count to get the correct number of HTTP/1.1 requests through one
     * socket before we hit the keepalive setting of the web server.
     */
    printf("Number of requests through same socket: %u\n", loop - 2);
}
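For reference, loop_sleep() is nothing fancy; a minimal sketch of what
it does, assuming plain nanosleep() with EINTR retry (the retry handling
is my assumption, the real helper may differ):

#include <errno.h>
#include <time.h>

/* Sleep for 100 ms using nanosleep(). The EINTR handling is an
 * assumption; the actual helper may look different. */
static void loop_sleep(void) {
    struct timespec req = { 0, 100 * 1000 * 1000 }; /* 100 ms */
    while (nanosleep(&req, &req) == -1 && errno == EINTR)
        ; /* interrupted by a signal: resume with the remaining time */
}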
What did I miss? And would it make sense to have more information on the
socket state reported back through curl_easy_getinfo(), such as an
n-tuple of socket addr/peer addr/port/peer port/state/... ?
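To sketch what I mean: part of that tuple could be reconstructed by hand
if libcurl handed out the socket descriptor, e.g. via CURLINFO_LASTSOCKET
where available, combined with getsockname()/getpeername(). A rough
sketch, assuming such a libcurl and an IPv4 connection; the helper name
inspect_socket is made up:

#include <curl/curl.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Hypothetical helper: print the local/peer addr:port tuple of the
 * connection an easy handle last used. Assumes a libcurl that offers
 * CURLINFO_LASTSOCKET and an IPv4 connection. */
static void inspect_socket(CURL *curl) {
    long sockfd = -1;
    struct sockaddr_in local, peer;
    socklen_t llen = sizeof(local), plen = sizeof(peer);
    char lbuf[INET_ADDRSTRLEN], pbuf[INET_ADDRSTRLEN];

    if (curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd) != CURLE_OK
        || sockfd == -1)
        return;
    if (getsockname((int)sockfd, (struct sockaddr *)&local, &llen) != 0
        || getpeername((int)sockfd, (struct sockaddr *)&peer, &plen) != 0)
        return;
    inet_ntop(AF_INET, &local.sin_addr, lbuf, sizeof(lbuf));
    inet_ntop(AF_INET, &peer.sin_addr, pbuf, sizeof(pbuf));
    printf("%s:%u -> %s:%u\n", lbuf, (unsigned)ntohs(local.sin_port),
           pbuf, (unsigned)ntohs(peer.sin_port));
}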
Another problem I've encountered is that if you start a couple of
hundred parallel tests and get close to resource starvation, libcurl
seems to have issues with signal handling. The symptom is that the two
sides (client and server) do not show the same number of sockets in
state ESTABLISHED. That is of course not normally possible unless
something hangs, and when I straced the processes which had sockets in
ESTABLISHED state on the client but not on the server, I found that
libcurl was looping in a select() statement. Unfortunately the "defect"
is extremely difficult to reproduce; I just wanted to note it in case
this is an old issue. It also seems that after I changed my sleep()
calls to nanosleep() the problem did not occur as often as before.
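For what it's worth, the only signal-related knob I know of is
CURLOPT_NOSIGNAL, which tells libcurl to stay away from signals (such as
the SIGALRM it uses for DNS timeouts). Whether it would cure the stuck
select() above is untested speculation on my part; setting it is
trivial:

#include <curl/curl.h>

/* Ask libcurl not to install or use signals (e.g. SIGALRM for DNS
 * timeouts). Whether this avoids the stuck select() described above
 * is an open question. */
static void disable_curl_signals(CURL *curl) {
    curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
}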
Thanks for all pointers to my silly little problems,
Roberto Nibali, ratz
-- 
-------------------------------------------------------------
addr://Kasinostrasse 30, CH-5001 Aarau   tel://++41 62 823 9355
http://www.terreactive.com               fax://++41 62 823 9356
-------------------------------------------------------------
terreActive AG                           Wir sichern Ihren Erfolg
-------------------------------------------------------------
Received on 2005-09-27