
CURLOPT_BUFFERSIZE: support enlarging receive buffer #1222

Closed
richykim wants to merge 2 commits

Conversation

richykim (Contributor)

Replace use of fixed macro BUFSIZE to define the size of the receive buffer.
Reappropriate CURLOPT_BUFFERSIZE to allow enlarging receive buffer size.
Upon setting, resize buffer if larger than the current default size up to a
MAX_BUFSIZE (512KB). This can benefit protocols like SFTP.
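For context, a minimal sketch of how an application would ask for the larger receive buffer this change enables; the URL and the exact size are illustrative only, and the value is treated as a request rather than a guarantee:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      curl = curl_easy_init();
      if(curl) {
        /* hypothetical SFTP URL, purely for illustration */
        curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/big-file.bin");

        /* request a 512KB receive buffer; only sizes larger than the
           default grow the buffer, and the library may cap the request */
        curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 512L * 1024L);

        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }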

@bagder (Member) left a comment

The warnings are what cause the CI failures.

…gets called more often and with smaller chunks. Secondly, for some protocols,
there's a benefit of having a larger buffer for performance. This is just
treated as a request, not an order. You cannot be guaranteed to actually get
the given size.
Member

Maybe add here that growing the buffer was added in 7.53.0?

@@ -1309,6 +1309,7 @@ static CURLcode telnet_do(struct connectdata *conn, bool *done)
   struct timeval now;
   bool keepon = TRUE;
   char *buf = data->state.buffer;
+  const size_t buf_size = CURL_BUFSIZE(data->set.buffer_size);
Member

compiler warning:

telnet.c:1312:16: warning: unused variable 'buf_size' [-Wunused-variable]

  const size_t buf_size = CURL_BUFSIZE(data->set.buffer_size);

@@ -297,7 +297,7 @@ static CURLcode http_output_basic(struct connectdata *conn, bool proxy)
     pwd = conn->passwd;
   }

-  snprintf(data->state.buffer, sizeof(data->state.buffer), "%s:%s", user, pwd);
+  snprintf(data->state.buffer, CURL_BUFSIZE(data->set.buffer_size), "%s:%s", user, pwd);
Member

checksrc warning:

./http.c:300:89: warning: Longer than 79 columns (LONGLINE)

   snprintf(data->state.buffer, CURL_BUFSIZE(data->set.buffer_size), "%s:%s", user, pwd);


  /* Resize only if larger than default buffer size. */
  if(data->set.buffer_size > BUFSIZE) {
    data->state.buffer = realloc(data->state.buffer, data->set.buffer_size + 1);
Member

checksrc warning:

./url.c:2285:83: warning: Longer than 79 columns (LONGLINE)

       data->state.buffer = realloc(data->state.buffer, data->set.buffer_size + 1);

     (data->set.buffer_size < 1))
-    data->set.buffer_size = 0; /* huge internal default */
+    data->set.buffer_size = MAX_BUFSIZE; /* huge internal default */
Member

Isn't it weird to get 512KB if you ask for < 1? Wouldn't the default size be more reasonable then?

Contributor Author

Sure, default size is reasonable.
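
To make the agreed behavior concrete, here is a simplified sketch of the option handling being discussed: values below 1 fall back to the default, values above MAX_BUFSIZE are capped, and the buffer is only reallocated when it actually needs to grow. This is an illustration under those assumptions, not the exact code that was merged:

      case CURLOPT_BUFFERSIZE:
        data->set.buffer_size = va_arg(param, long);

        if(data->set.buffer_size > MAX_BUFSIZE)
          data->set.buffer_size = MAX_BUFSIZE; /* cap at 512KB */
        else if(data->set.buffer_size < 1)
          data->set.buffer_size = BUFSIZE;     /* keep the default size */

        /* grow the receive buffer only when more than the default is wanted */
        if(data->set.buffer_size > BUFSIZE) {
          char *newbuf = realloc(data->state.buffer,
                                 data->set.buffer_size + 1);
          if(!newbuf)
            return CURLE_OUT_OF_MEMORY;
          data->state.buffer = newbuf;
        }
        break;

Assigning realloc's result to a temporary first avoids losing the old buffer if the allocation fails.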

@@ -201,6 +201,8 @@
 /* Download buffer size, keep it fairly big for speed reasons */
 #undef BUFSIZE
 #define BUFSIZE CURL_MAX_WRITE_SIZE
+#define MAX_BUFSIZE 524288
Member

Do you have any thoughts on this specific limit? Why 512KB? For SFTP we've seen that even larger buffers help, mostly since we don't do any buffer sliding in the code so libcurl has to drain the entire buffer first before it can refill it.

Also, the max limit is not documented.

Contributor Author

I took 512KB as a reference from https://daniel.haxx.se/blog/2014/05/14/why-sftp-is-still-slow-in-curl/, and in ad hoc testing that's where the performance was equivalent to sftp on my link. But of course, I can change it to something bigger.

Should I add the max as a CURL_MAX_READ_SIZE define here (see the sketch below)?
include/curl/curl.h
docs/libcurl/symbols-in-versions
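
A sketch of what such a public define might look like, assuming the value mirrors MAX_BUFSIZE from urldata.h; the exact name, value and placement follow whatever was eventually merged:

    /* include/curl/curl.h: largest receive buffer an application can
       request via CURLOPT_BUFFERSIZE (512KB), mirroring MAX_BUFSIZE */
    #define CURL_MAX_READ_SIZE 524288

A matching entry would also go into docs/libcurl/symbols-in-versions with the release that introduces it.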

Contributor Author

I tried a 1MB buffer, and anecdotally I think it's diminishing returns:

16KB -> 0.6MB/s
512KB -> 9.4MB/s
1024KB -> 9.9MB/s

Member

Out of curiosity, what was the RTT to the server in this test?

Contributor Author

~55ms

@bagder (Member) Jan 23, 2017

Each SFTP packet needs to get requested individually, and sent back. That means that to fill up 512KB it needs 55 milliseconds (if we rule out a lot of other complications). It certainly can't do it any faster at least. (Because of how curl always drains the entire buffer first before it asks for more data.)

MAX = (512*1024) / 0.055 = 9532509 bytes/sec

That seems to match your numbers for the 512KB buffer case, but since you can't go faster with a larger buffer, it might indicate that the limit is then no longer in the client SFTP code and its buffer.

But if we instead talk with a more remote server, at say 200ms RTT, the max speed with a 512KB data buffer is:

MAX = (512 * 1024)/0.200 = 2621440 bytes/sec

While a doubled buffer size should be able to double the speed:

MAX = (2 * 512 * 1024)/0.200 = 5242880 bytes/sec
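
The back-of-the-envelope model behind these numbers is simply buffer size divided by round-trip time, since the whole buffer is drained before more data is requested. A small stand-alone sketch (RTT values taken from this thread) that reproduces the figures above:

    #include <stdio.h>

    /* Ceiling on throughput when one full buffer is fetched per round
       trip: buffer_bytes / rtt_seconds. */
    static double max_throughput(double buffer_bytes, double rtt_seconds)
    {
      return buffer_bytes / rtt_seconds;
    }

    int main(void)
    {
      printf("512KB @ 55ms : %.0f bytes/sec\n",
             max_throughput(512 * 1024, 0.055));     /* ~9.5 MB/s */
      printf("512KB @ 200ms: %.0f bytes/sec\n",
             max_throughput(512 * 1024, 0.200));     /* ~2.6 MB/s */
      printf("1MB   @ 200ms: %.0f bytes/sec\n",
             max_throughput(2 * 512 * 1024, 0.200)); /* ~5.2 MB/s */
      return 0;
    }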

- add CURL_MAX_READ_SIZE to define the maximum receive buffer size
- update docs
- clean-up long lines
@jay (Member) commented Jan 19, 2017

Could a large default buffer size cause a problem with embedded devices?

@richykim (Contributor Author)

I'm sure a large default buffer could be a problem on certain platforms. However, this change doesn't touch the default; it only gives the option of growing the receive buffer via CURLOPT_BUFFERSIZE.

@bagder (Member) left a comment

👍

@bagder closed this in 6b76166 Jan 19, 2017
@bagder (Member) commented Jan 19, 2017

Thanks a lot!

bagder pushed a commit that referenced this pull request Jan 19, 2017
Replace use of fixed macro BUFSIZE to define the size of the receive
buffer.  Reappropriate CURLOPT_BUFFERSIZE to include enlarging receive
buffer size.  Upon setting, resize buffer if larger than the current
default size up to a MAX_BUFSIZE (512KB). This can benefit protocols
like SFTP.

Closes #1222
peterpih pushed a commit to railsnewbie257/curl that referenced this pull request Jan 24, 2017
lock bot locked as resolved and limited conversation to collaborators Jan 19, 2019