Fwd: libcurl read-like interface
From: XSLT2.0 via curl-library <curl-library_at_cool.haxx.se>
Date: Sat, 26 Dec 2020 23:53:21 +0100
"Loop inversion"
I have fixed fcurl for the read scenario (made a PR).
Made a basic fcurl_read/write_to_stdout program with fcurl.
(extract)
while(!fcurl_eof(fcurl)) {
  sz = fcurl_read(buf, 1, BUFSIZE, fcurl);
  if(sz > 0) {
    wsz = write(STDOUT_FILENO, buf, sz);
    if(wsz != sz) {
      fprintf(stderr, "write() %ld bytes failed\n", (long)sz);
      return 1;
    }
  }
}
With http(s) 1.1, the maximum memory allocated by the closed loop
(transfer/callback in fcurl) seems very reasonable in all the tests I
have done (a few kilobytes), so this is close enough to loop
inversion, even if libcurl still technically owns the transfer loop.
I can't explain why, with http/2, it is so unreasonable!
./fcurl_transfer https://www.youtube.com/s/player/5dd3f3b2/player_ias.vflset/en_US/base.js >/dev/null
+ content-length: 1556389
+ maxalloc=1184200
So, basically, from a 1.5MB file we load 1.2MB into memory!
Under Valgrind I even hit 100% of the file in memory...
Sorry, I could not find bigger files hosted on http/2; if you have
some, I'll test.
Any explanation for that?
Cheers
Alain
-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.se/mail/etiquette.html
Received on 2020-12-26