Feature request: Support "--continue-at -" together with --retry when writing to stdout
From: Jörn Heissler via curl-users <curl-users_at_lists.haxx.se>
Date: Fri, 17 Apr 2026 12:40:47 +0200
Hello,
I'm trying to download a large file from https with curl and pipe it into
another program.
That transfer might get interrupted (e.g. intermittent network errors or
server restart), so it would be nice if curl could continue where it was
interrupted. That's what the "-C -" flag usually does.
But it doesn't appear to work when writing to stdout instead of a file.
With the naive approach
curl -C - --retry 5 --retry-all-errors https://example.net/huge.img | program
the download is restarted from the beginning of the file on each
interruption, so the result will contain duplicate data.
The curl manpage actually warns about this:
> We strongly suggest you do not parse or record output via redirect in
> combination with this option, since you may receive duplicate data.
What I want already works with "wget -c -O - URL | program", and I don't
see any good technical reason why curl couldn't implement it too.
Curl knows how many bytes it already received in previous attempts, so
it could set the "Range: bytes=XX-" header accordingly on each retry.
If the server does not support Range requests, it will probably send a
200 response (instead of 206 Partial Content) and restart from the
beginning. Curl could detect that and throw away the first XX bytes
before writing the "new" bytes to stdout.
N.B.: For testing on Linux, a connection can be severed with:
ss -K dst 198.51.100.217 dport = :443
Thanks
Jörn
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-users
Etiquette: https://curl.se/mail/etiquette.html

Received on 2026-04-17