(no subject)

From: <>
Date: Mon, 15 Dec 2008 13:42:45 -0500 (EST)

>> With a slightly-modified set to 20 concurrent
>> connections retrieving the same 100KB file, it generates slightly
>> over 5 MEGABITS per second, and the CPU is maxed out.
> That sounds terribly wrong.

Turns out that you're quite right.

I just ran three tests comparing Perl LWP against Python with urllib2 and
Python with PyCurl: 10,000 iterations of the simplest possible HTTP GET of
a 1-byte file. I did my best to remove all overhead except for the for
loop and the GETs.

Perl: 15.4 seconds
urllib2: 8.9 seconds
pycurl: 1.3 seconds

It seems obvious (now) that pycurl is much faster.
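For anyone who wants to reproduce the loop: here is a minimal sketch of the kind of test I ran, written for modern Python (urllib.request stands in for the Python 2 urllib2 I used, and a throwaway local server serving a one-byte body stands in for the real host, so the timing reflects client overhead rather than the network). The handler class, the iteration count N, and the variable names are mine, not from the original test.

```python
import http.server
import threading
import time
import urllib.request

# A throwaway local server that answers every GET with a single byte,
# so the loop below measures client-side overhead, not network latency.
class OneByteHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "1")
        self.end_headers()
        self.wfile.write(b"x")

    def log_message(self, *args):
        # Suppress per-request logging so it doesn't skew the timing.
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), OneByteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

# The original test used 10,000 iterations; a smaller N shows the shape.
N = 100
start = time.time()
for _ in range(N):
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
elapsed = time.time() - start

server.shutdown()
print("fetched %d bytes x %d in %.3fs" % (len(body), N, elapsed))
```

The same loop with pycurl would reuse one Curl handle across all N requests, which (connection reuse aside) avoids rebuilding the request machinery each iteration and is presumably part of why it comes out so much faster.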

As to why my original test was slow - I have no idea. I copied the example from the PyCurl page
and edited it so it wouldn't print the retrieved page to STDOUT.

I really don't know curl or Python well enough to figure out why. At least
not now - I will be digging into it as part of this project.