Re: Epoll performance issues.
From: James Read via curl-library <curl-library_at_cool.haxx.se>
Date: Thu, 26 Nov 2020 20:04:53 +0000
Hi,
On Tue, Nov 24, 2020 at 7:50 PM James Read <jamesread5737_at_gmail.com> wrote:
> Hi,
>
> On Tue, Nov 24, 2020 at 5:37 PM Tomalak Geret'kal via curl-library <
> curl-library_at_cool.haxx.se> wrote:
>
>> On 23/11/2020 20:16, James Read via curl-library wrote:
>> > I have attempted to write two minimal programs
>> > that demonstrate my problem.
>> >
>> > The first can be downloaded from
>> > https://github.com/JamesRead5737/fast
>> > It repeatedly downloads http://www.google.com,
>> > http://www.yahoo.com, and http://www.bing.com.
>> > I am able to achieve download speeds of up to
>> > 7 Gbps with this simple program.
>> >
>> > The second can be downloaded from
>> > https://github.com/JamesRead5737/slow
>> > This program extends the first with an asynchronous
>> > DNS component and, instead of downloading the same
>> > URLs over and over again, downloads from a list of
>> > URLs provided in the http001 file. Full instructions
>> > are in the README. What's troubling me is that this
>> > second version only achieves an average download
>> > speed of 16 Mbps.
>> >
>> > I have no idea why this is happening. Shouldn't the second
>> > program run just as fast as the first?
>> >
>> > Any ideas what I'm doing wrong?
>>
>> That's a lot of code you're asking us to debug.
>>
>>
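To narrow things down: both programs drive their transfers the same way, with
libcurl's multi interface on top of epoll via curl_multi_socket_action().
Stripped to its skeleton, the loop looks roughly like the sketch below. This
is my reduction, not the exact code from either repo; names like g_epfd are
invented here, and handle bookkeeping plus most error checks are omitted.

/* Minimal skeleton: libcurl multi interface driven by epoll.
 * Single-threaded; most error handling omitted for brevity. */
#include <curl/curl.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static int g_epfd;  /* epoll instance */
static int g_tfd;   /* timerfd carrying libcurl's requested timeout */

/* libcurl tells us which socket to watch, and for what. */
static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *userp, void *socketp)
{
  struct epoll_event ev;
  memset(&ev, 0, sizeof(ev));
  ev.data.fd = s;
  if(what == CURL_POLL_REMOVE) {
    epoll_ctl(g_epfd, EPOLL_CTL_DEL, s, NULL);
    return 0;
  }
  if(what & CURL_POLL_IN)
    ev.events |= EPOLLIN;
  if(what & CURL_POLL_OUT)
    ev.events |= EPOLLOUT;
  /* MOD an already-registered socket, ADD a new one */
  if(epoll_ctl(g_epfd, EPOLL_CTL_MOD, s, &ev) == -1)
    epoll_ctl(g_epfd, EPOLL_CTL_ADD, s, &ev);
  return 0;
}

/* libcurl tells us when it next wants to be called. */
static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  struct itimerspec its;
  memset(&its, 0, sizeof(its));
  if(timeout_ms == 0)
    its.it_value.tv_nsec = 1;  /* zero would disarm; fire "now" instead */
  else if(timeout_ms > 0) {
    its.it_value.tv_sec = timeout_ms / 1000;
    its.it_value.tv_nsec = (timeout_ms % 1000) * 1000000;
  }                            /* timeout_ms == -1 leaves it disarmed */
  timerfd_settime(g_tfd, 0, &its, NULL);
  return 0;
}

int main(void)
{
  int running = 1;
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();
  g_epfd = epoll_create1(0);
  g_tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);

  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

  /* watch the timerfd alongside the transfer sockets */
  struct epoll_event tev;
  memset(&tev, 0, sizeof(tev));
  tev.events = EPOLLIN;
  tev.data.fd = g_tfd;
  epoll_ctl(g_epfd, EPOLL_CTL_ADD, g_tfd, &tev);

  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://www.google.com/");
  curl_multi_add_handle(multi, easy);  /* triggers timer_cb to kick off */

  while(running) {
    struct epoll_event events[64];
    int n = epoll_wait(g_epfd, events, 64, -1);
    for(int i = 0; i < n; i++) {
      if(events[i].data.fd == g_tfd) {
        uint64_t expirations;
        read(g_tfd, &expirations, sizeof(expirations));
        curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
      }
      else {
        int flags = 0;
        if(events[i].events & EPOLLIN)
          flags |= CURL_CSELECT_IN;
        if(events[i].events & EPOLLOUT)
          flags |= CURL_CSELECT_OUT;
        curl_multi_socket_action(multi, events[i].data.fd, flags, &running);
      }
    }
    /* completed transfers would be reaped here via curl_multi_info_read() */
  }
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}

The key property is that libcurl never blocks here: it only tells us which
sockets and timeouts to watch, and we feed the resulting events back in. The
two programs only differ in where the URLs come from and in the DNS step.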
I don't know if this helps, but I would be willing to become a silver
sponsor of the libcurl project if I can find a decent solution to this
problem. To me, a solution means the difference between a high-performance
web crawler and a low-performance one, so it matters a lot. I realise this
is a hard question; hopefully some funding in the direction of the project
would make it a little easier.
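One guess worth eliminating early (and it is purely a guess on my part) is
that the slowdown is resolver-related: if name lookups ever fall back to a
blocking getaddrinfo() inside the event loop, throughput would collapse in
exactly this way. A tiny check of what the local libcurl build supports:

/* Report whether this libcurl can resolve names without blocking. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  printf("libcurl %s\n", info->version);
  printf("async DNS (c-ares or threaded resolver): %s\n",
         (info->features & CURL_VERSION_ASYNCHDNS) ? "yes" : "no");
  if(info->ares)
    printf("c-ares: %s\n", info->ares);
  return 0;
}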
James Read
-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.se/mail/etiquette.html
Received on 2020-11-26