Re: HTTP PUT and chunked transfer
Date: Tue, 21 Dec 2004 22:55:42 +0000
On Tue, Dec 21, 2004 at 09:15:52PM +0100, Daniel Stenberg wrote:
> What version are you using?
aharth_at_deri-swc01:~$ curl --version
curl 7.12.2 (i386-pc-linux-gnu) libcurl/7.12.2 OpenSSL/0.9.7e zlib/1.2.2 libidn/0.5.2
Protocols: ftp gopher telnet dict ldap http file https ftps
Features: IDN IPv6 Largefile NTLM SSL libz
> I just tried this:
> curl -T log/file218 -H "Transfer-Encoding: chunked" [URL]
> ... and it gets sent chunked just fine. Of course, it seems rather silly to
> send this kind of data chunked when the file size is known... :-)
Is there a way to specify the size of the chunks? I'd like to
commit the data that is sent/received to the database every, say, 10 MB.
The problem is that if I send a 400 MB file and don't commit to disk
once in a while during transmission, I get OutOfMemory errors.
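If the OutOfMemory errors come from buffering the whole request body on the
server, one workaround is independent of the wire chunk size: the servlet
container's input stream hides the chunk boundaries anyway, so the servlet can
just read in fixed-size buffers and commit every N bytes. A minimal sketch,
assuming a hypothetical `Sink` with `write`/`commit` methods standing in for
the database layer (the names and sizes are illustrative, not from the thread):

```java
import java.io.IOException;
import java.io.InputStream;

public class ChunkedCommitReader {

    // Hypothetical destination: e.g. a database writer with transactions.
    interface Sink {
        void write(byte[] buf, int len) throws IOException;
        void commit() throws IOException;
    }

    // Read the stream in bufSize pieces (regardless of the HTTP chunk size
    // on the wire) and commit whenever commitInterval bytes have accumulated,
    // so memory use stays bounded even for a 400 MB upload.
    static long drain(InputStream in, Sink sink,
                      int bufSize, long commitInterval) throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0, sinceCommit = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            sink.write(buf, n);
            total += n;
            sinceCommit += n;
            if (sinceCommit >= commitInterval) {
                sink.commit();       // e.g. every 10 * 1024 * 1024 bytes
                sinceCommit = 0;
            }
        }
        sink.commit();               // flush whatever remains after the last interval
        return total;
    }
}
```

In a servlet you would call `drain(request.getInputStream(), sink, 64 * 1024,
10L * 1024 * 1024)`; the 64 KB read buffer and 10 MB commit interval are
independent knobs.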
Is that somehow possible? Does this make sense at all? I could
hard-code the commit interval in the servlet, but it would be nicer
if I could specify the chunk size from the client.
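For controlling the chunk size from the client: the curl command line in
7.12.x doesn't appear to expose a chunk-size option, but if a small Java
uploader is an option, `java.net.HttpURLConnection` lets you set the chunk
length directly via `setChunkedStreamingMode` (Java 5+). A sketch; the
buffer size and URL handling are illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedPut {

    // PUT the stream with "Transfer-Encoding: chunked", asking the JDK
    // client to emit chunks of roughly chunkSize bytes each.
    static int sendChunkedPut(URL url, InputStream in, int chunkSize)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(chunkSize);   // e.g. 10 * 1024 * 1024
        OutputStream out = conn.getOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);                  // streamed, never fully buffered
        }
        out.close();
        return conn.getResponseCode();
    }
}
```

That said, the servlet sees a plain input stream either way, so committing
every 10 MB on the server works no matter what chunk size the client picks.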