Re: HTTP PUT and chunked transfer

From: Andreas Harth <andreas.harth_at_deri.org>
Date: Wed, 22 Dec 2004 18:33:24 +0000

Hi Daniel,

On Wed, Dec 22, 2004 at 10:26:58AM +0100, Daniel Stenberg wrote:
> On Tue, 21 Dec 2004, Andreas Harth wrote:
>
> >aharth_at_deri-swc01:~$ curl --version
>
> >curl 7.12.2 (i386-pc-linux-gnu) libcurl/7.12.2 OpenSSL/0.9.7e zlib/1.2.2
> >libidn/0.5.2
>
> And this fails when you try the command line I used?

I'm not sure; I think it works, but I haven't had the chance to verify
it at the server end.
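
For reference, the command line I have been testing is along these lines
(a sketch from memory; the URL is just a placeholder, and Daniel's exact
command may have differed):

  curl -T bigfile.rdf -H "Transfer-Encoding: chunked" http://example.org/store

With -T curl does an HTTP PUT of the file, and the extra header asks
libcurl to send the body with chunked transfer encoding.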

> >I'd like to commit the data that is sent/received to the database every
> >say 10 MB. The problem is that if I sent a 400 MB file and don't commit to
> >disk once in a while during transmission I get OutOfMemory errors.
>
> 1. Why do you need chunked encoding for this? Can't you just save data
> every X bytes received?
>
> 2. Receiving huge posts in memory only seems a bit naive!
>
> 3. You can be sure that libcurl will never send chunks bigger than 16KB,
> as that is the maximum size of its internal buffer used for upload and
> that is then the maximum chunk size it uses. Unless we change the buffer
> size in a future release of course. I don't want to promise that we won't
> ever do that.

You're right about that. What I wanted to do was implement some sort
of transaction support similar to relational databases (either the
file gets added completely or not at all). However, my database
implementation only supports transactions in memory, so I was
looking for a workaround. But the right solution is to implement
on-disk transaction support in my database; I can see that more
clearly now.
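
For the archives, here is roughly what the upload side looks like with
libcurl (only a sketch; the URL and file name are placeholders, and
error checking is kept to a minimum):

  /* Chunked HTTP PUT with libcurl, feeding data from a file through a
   * read callback.  libcurl asks the callback for at most its internal
   * upload buffer (16KB), so the chunks on the wire never exceed that. */
  #include <stdio.h>
  #include <curl/curl.h>

  static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userdata)
  {
      /* libcurl calls this with size*nitems <= its upload buffer size */
      return fread(buffer, size, nitems, (FILE *)userdata);
  }

  int main(void)
  {
      FILE *in = fopen("bigfile.rdf", "rb");   /* placeholder file name */
      if (!in)
          return 1;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *curl = curl_easy_init();

      /* no Content-Length is known, so ask for chunked encoding explicitly */
      struct curl_slist *hdrs =
          curl_slist_append(NULL, "Transfer-Encoding: chunked");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.org/store"); /* placeholder */
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);          /* HTTP PUT */
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, in);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

      CURLcode res = curl_easy_perform(curl);

      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
      curl_global_cleanup();
      fclose(in);
      return res == CURLE_OK ? 0 : 1;
  }

The read callback is never handed more than libcurl's internal upload
buffer at a time, which is why the chunks stay at or below 16KB, as
Daniel pointed out above.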

Thanks for your comments, they helped me a lot!

Regards,
Andreas.

-- 
http://sw.deri.org/~aharth/
Received on 2004-12-22