
Re: relative performance

From: Rich Gray via curl-library <curl-library_at_lists.haxx.se>
Date: Thu, 2 Sep 2021 14:31:13 -0400

Ben Greear via curl-library wrote:
> On 8/24/21 8:20 AM, Daniel Stenberg via curl-library wrote:
>> Hi all,
>>
>> For a long time I've wanted to get something done that allows us to
>> compare curl's relative performance. Ideally something we can run every
>> once in a while to compare that nothing major has turned sour without us
>> being aware of it.
>>
>> A first step would be a tool we can run that measures "relative
>> performance". Like doing N transfers of size X and measure how fast it can
>> complete them. Running the same tool on the same host with the same server
>> but built to use different libcurl versions should then not get noticeably
>> worse speeds over time. (Barring the difficulty of measuring network
>> things when other programs are also running on the test host.)
>>
>> I'm not sure exactly how to do this, but I have a first shot at such a
>> tool written and I figured we can create a new repository for this
>> (curl/relative I'm thinking) and perhaps add more smaller tools for
>> various tests as we advance. Then work out how to actually run them with
>> different/current libcurls.
>>
>> Thoughts?
>>
>
> What is your network-under-test in this case?
>
> And, if you want a network emulator (and don't want to mess with
> netem), contact me off list, I'll give you free software licenses
> for our product.  Our rig can easily bundle a web server and a (virtual)
> routed network too, so it should be a pretty complete test rig for this
> case if you wish...
>
> Thanks,
> Ben
>

Daniel, this is a generous offer you should probably accept, unless you
already have such a tool. Performance issues aside, the ability to create a
slow/ratty network, which forces all sorts of edge cases around delays and
timeouts, could be of considerable value. I found all sorts of bugs in some
SNMP code I was working on when I started inserting delays, dropping and
duplicating packets, etc. While higher-level protocols will hide a lot of
packet-level crud, I would still expect that emulating a glitchy network
could drive curl through some of its less frequented code paths.
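
For example, these are the kinds of options a glitchy network would put to
work. A minimal sketch (the URL is just a placeholder):

#include <stdio.h>
#include <curl/curl.h>

/* Throw the body away; we only care how the transfer behaves. */
static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)ptr;
  (void)userdata;
  return size * nmemb;
}

int main(void)
{
  CURLcode rc;
  CURL *h;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  h = curl_easy_init();
  if(!h)
    return 1;

  curl_easy_setopt(h, CURLOPT_URL, "http://test.example/1MB.bin");
  curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, discard);

  /* the code paths a ratty network exercises: */
  curl_easy_setopt(h, CURLOPT_CONNECTTIMEOUT_MS, 3000L); /* slow handshakes */
  curl_easy_setopt(h, CURLOPT_TIMEOUT_MS, 30000L);       /* whole-transfer cap */
  curl_easy_setopt(h, CURLOPT_LOW_SPEED_LIMIT, 1000L);   /* abort if slower... */
  curl_easy_setopt(h, CURLOPT_LOW_SPEED_TIME, 10L);      /* ...than 1KB/s for 10s */

  rc = curl_easy_perform(h);
  if(rc != CURLE_OK)
    fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(rc));

  curl_easy_cleanup(h);
  curl_global_cleanup();
  return rc != CURLE_OK;
}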

In general, I don't know if it makes sense to do performance testing against
internet servers without running multiple passes to average out whatever
anomalies occur along the uncontrollable connection path. A local setup with
the ability to control speed, congestion, dropped/duplicated packets,
timeouts, failed connections, etc. is more real-world than just making it go
as fast as possible over the fastest connection available. (Tests should be
repeatable.) Of course, knowing how it does flat out is useful too.
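
For what it's worth, a multi-pass version of the "N transfers of size X"
tool Daniel describes could start as small as this sketch (the URL and the
pass/transfer counts are placeholders):

#include <stdio.h>
#include <curl/curl.h>

#define PASSES    5   /* repeat to smooth out network anomalies */
#define TRANSFERS 20  /* N transfers per pass */

static size_t discard(char *p, size_t sz, size_t n, void *u)
{
  (void)p;
  (void)u;
  return sz * n;
}

int main(void)
{
  int pass, i;

  curl_global_init(CURL_GLOBAL_DEFAULT);

  for(pass = 0; pass < PASSES; pass++) {
    double total = 0;
    CURL *h = curl_easy_init();
    if(!h)
      return 1;
    curl_easy_setopt(h, CURLOPT_URL, "http://localhost/10MB.bin");
    curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, discard);

    for(i = 0; i < TRANSFERS; i++) {
      double t;
      if(curl_easy_perform(h) != CURLE_OK)
        return 1;
      curl_easy_getinfo(h, CURLINFO_TOTAL_TIME, &t);
      total += t;
    }
    printf("pass %d: %d transfers in %.3f seconds\n",
           pass + 1, TRANSFERS, total);
    curl_easy_cleanup(h);
  }
  curl_global_cleanup();
  return 0;
}

Reusing one easy handle across the transfers keeps connections alive between
them, which is itself part of what you would want to compare across libcurl
versions.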

Cheers!
Rich
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html
Received on 2021-09-02