Re: libCurl taking long time to download

From: Sandeep R M <sandeep.rm_at_gmail.com>
Date: Fri, 4 Sep 2009 13:52:02 -0400

Hi All,

Thanks, all, for the input. This morning, while running some experiments, I
observed that httplib and urllib2 were actually fetching data from the proxy
cache. This is why libCurl appeared slower: it always fetched from the
web server. The latest figures are:

Python urllib2 - real 1m 3.37s,  usr 0m 0.22s, sys 0m 0.13s
Python httplib - real 1m 39.56s, usr 0m 0.16s, sys 0m 0.10s
wget           - real 1m 10.39s, usr 0m 0.01s, sys 0m 0.13s
libCurl        - real 1m 17.37s, usr 0m 0.04s, sys 0m 0.10s
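
For a fair end-to-end comparison, each client can be told to bypass the proxy
cache so every tool fetches from the origin server. A rough sketch, with
http://server/file standing in for the actual test URL:

  curl -s -o /dev/null -H "Cache-Control: no-cache" -H "Pragma: no-cache" http://server/file
  wget -q -O /dev/null --no-cache http://server/file

These headers ask intermediate caches to revalidate with the origin rather
than serve a stored copy; the same headers can be added to the urllib2 and
httplib requests.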

Once again thanks for your suggestions.

Warm Regards,
Sandeep.
On Thu, Sep 3, 2009 at 7:48 PM, <johansen_at_sun.com> wrote:

> On Thu, Sep 03, 2009 at 04:53:18PM -0400, Sandeep R M wrote:
> > Hi Daniel,
> >
> > I was just trying to see which is the best option available, so I had to
> > conduct this test.
> >
> > Solaris version being used is,
> > SunOS test 5.10 Generic_127111-09 sun4u sparc SUNW,SPARC-Enterprise
> > I am using curl version 7.19.0 (we only have this version available
> > currently).
>
> I get nearly identical times for curl and wget, with curl using less sys
> time:
>
> curl
> ----
>
> real 13.384214331
> user 0.032008191
> sys 0.050414150
>
> wget
> ----
>
> real 13.108820703
> user 0.029270086
> sys 0.118620616
>
> Given your high real times compared to the low user/sys time, it seems
> like your application is probably spending a lot of time blocking.
> Since you're on Solaris 10, you can look at this with DTrace.
>
> On my system the output looks like this:
>
> genunix`cv_timedwait_sig_hires+0x1db
> genunix`cv_timedwait_sig+0x32
> rpcmod`clnt_cots_kcallit+0x697
> nfs`nfs4_rfscall+0x418
> nfs`rfs4call+0xb6
> nfs`nfs4lookupvalidate_otw+0x267
> nfs`nfs4lookup+0x1f7
> nfs`nfs4_lookup+0xe3
> genunix`fop_lookup+0xed
> genunix`lookuppnvp+0x3a3
> genunix`lookuppnatcred+0x11b
> genunix`lookupnameatcred+0x98
> genunix`lookupnameat+0x69
> genunix`vn_openat+0x232
> genunix`copen+0x418
> genunix`open64+0x34
> unix`_sys_sysenter_post_swapgs+0x149
> 378570
>
> genunix`shuttle_resume+0x328
> doorfs`door_call+0x2b1
> doorfs`doorfs32+0x141
> unix`_sys_sysenter_post_swapgs+0x149
> 8213468
>
> genunix`cv_timedwait_sig_hires+0x1db
> genunix`cv_waituntil_sig+0xba
> genunix`poll_common+0x461
> genunix`pollsys+0xe4
> unix`_sys_sysenter_post_swapgs+0x149
> 13608004750
>
> The values are in nanoseconds, so the interesting stack here is the last
> one, which accounts for 13.6 seconds. In this case, I'm performing a
> cv_timedwait_sig in the kernel. Essentially, I'm waiting in poll for a
> socket to become readable. The door call is a call into NSCD for name
> resolution, and the nfs4 call is to my home directory to load the dtrace
> script.
>
> I'm attaching the D-script that I used as sleepers.d. In order to run
> this, I had the dtrace privilege, which in this case meant I was root.
>
> # dtrace -s sleepers.d -c "<curl command>"
>
> Hope this helps.
>
> -j
>
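
The sleepers.d attachment is not reproduced in this archived message. As a
rough sketch (an assumption, not the original attachment), an off-CPU script
that produces this kind of per-stack sleep-time output in nanoseconds could
look like:

  #!/usr/sbin/dtrace -s

  /* remember when a thread of the traced command goes off CPU */
  sched:::off-cpu
  /pid == $target/
  {
          self->ts = timestamp;
  }

  /* when it comes back on CPU, charge the sleep time to the kernel stack */
  sched:::on-cpu
  /self->ts/
  {
          @[stack()] = sum(timestamp - self->ts);
          self->ts = 0;
  }

  /* print total nanoseconds slept per stack when the command exits */
  END
  {
          printa(@);
  }

When the script is run with -c as shown above, $target resolves to the pid of
the curl command being traced.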