curl-library
Re: Support for long-haul high-bandwidth links
Date: Thu, 10 Nov 2011 09:39:37 +0100 (CET)
On Wed, 9 Nov 2011, Andrew Daviel wrote:
> When transferring a large file over a high-latency (e.g. long physical
> distance) high-bandwidth link, the transfer time is dominated by the
> round-trip time for TCP handshakes.
This is not true. The RTT is not really a factor for a single uni-directional
stream in modern TCP, since window scaling (windows larger than the 16-bit
field allows) was introduced many years ago. If you believe otherwise, then
please provide details.
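For context on why the 16-bit window once mattered: a sender must keep a full
bandwidth-delay product's worth of data in flight to fill the pipe. A quick
back-of-the-envelope sketch (the link figures below are made up purely for
illustration, not taken from this thread):

```python
# Bandwidth-delay product (BDP): bytes that must be "in flight"
# (sent but unacknowledged) to keep a link busy.
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Return the bandwidth-delay product in bytes."""
    return bandwidth_bps / 8 * rtt_seconds

# Largest window a plain 16-bit window field can advertise: 64 KiB.
CLASSIC_MAX_WINDOW = 2 ** 16

# Illustrative long-haul link: 1 Gbit/s with 100 ms RTT.
bdp = bdp_bytes(1e9, 0.100)
print(bdp)                        # 12500000.0 bytes (12.5 MB)
print(bdp > CLASSIC_MAX_WINDOW)   # True: window scaling is needed
```

With window scaling negotiated, a single TCP stream can advertise a window
large enough to cover that BDP, which is why RTT alone does not cap throughput
on one stream.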
RTT becomes an issue between TCP streams, like when doing HTTP and requesting
multiple files. That's why HTTP pipelining was invented.
> In the past tools such as bbftp have mitigated this effect by using multiple
> streams, but required both a special server and client.
Using multiple TCP streams for transfers is a way to overcome certain
bottlenecks in transfer technology. Those typically involve ISP-side
throttling per stream, the TCP slow-start and Nagle algorithms, and a smallish
initial congestion window in TCP.
> Using the "range" header in HTTP/1.1, it is possible to start multiple
> simultaneous requests for different portions of a file using a standard
> Apache WebDAV server,
Using any compliant HTTP/1.1 server even! You'll find that not all HTTP/1.1
servers will honor that header though.
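To make the range-splitting idea concrete, here is a minimal sketch of how a
client could carve a resource of known size into N byte ranges for parallel
HTTP/1.1 Range requests. The function name and parameters are illustrative,
not part of any curl API:

```python
# Sketch: compute "Range: bytes=start-end" header values that cover
# [0, total_size) in `parts` contiguous, non-overlapping chunks.
def byte_ranges(total_size: int, parts: int) -> list[str]:
    """Return Range header values splitting the resource into `parts` chunks."""
    chunk = total_size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        # The last chunk absorbs any remainder from the integer division.
        end = total_size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append(f"bytes={start}-{end}")
    return ranges

print(byte_ranges(1000, 4))
# ['bytes=0-249', 'bytes=250-499', 'bytes=500-749', 'bytes=750-999']
```

Each range could then be fetched on its own connection, e.g. with the curl
tool's -r option (curl -r 0-249 URL), and the pieces reassembled afterwards.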
> and achieve a significant speedup for both GET and PUT.
The level of "significant" will vary greatly, as you will find out.
And for PUT it is not the same thing, since servers generally do not accept
ranged uploads the way they serve ranged downloads.
> I wondered if this was of interest as an enhancement for Curl.
libcurl already supports HTTP/1.1 fully and allows applications to use libcurl
to do many-connection transfers if they desire. Or are you talking about
adding the ability to the curl tool?
--
 / daniel.haxx.se
-------------------------------------------------------------------
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette: http://curl.haxx.se/mail/etiquette.html
Received on 2011-11-10