curl-library
Re: Info request about the zero copy interface
Date: Thu, 01 Dec 2005 18:16:39 +0100
Daniel Stenberg wrote:
> On Thu, 1 Dec 2005, Legolas wrote:
>
>>> "every" ? What other operations than read or write do you think it
>>> needs?
>>>
>> I don't understand your question; however, passing libcurl an object
>> with assigned read/write methods (and others if needed) would avoid
>> the memory-copy bottleneck of file writing or reading (data would not
>> be cached into memory and then passed to libcurl, but directly
>> read/written by libcurl itself through the read/write methods).
>
>
> No, that is _not_ what a zero copy interface does, as it does not avoid
> the extra copy. We already have that kind of callbacks, and they are
> called read and write callbacks.
I am suggesting to override those callbacks, i.e. _not_ to use them, but
instead to use a custom-made object (the one I was talking about before)
which provides the corresponding callbacks for files or memory.
>
> To me, it looks like you are too focused on squeezing your own ideas
> into libcurl than to actually provide a usable zero copy interface.
To you. And I am afraid you have misunderstood. I am wondering why you
are taking a ridiculing tone with me. If you have no time, or if you
think answering me is not worth it, simply don't. You could have just
asked me to explain better, but it looks like you don't care about that.
However, my aim was just to give suggestions and receive feedback; I
don't think I have been arrogant in my replies.
>
>> 1) Example of a possible libcurl source snippet (WRITE operation,
>> maybe a download):
>
>
> You say we call the write callback. In what way does this write
> callback differ from the write callback libcurl already features?
It differs because the write callback of a file-type object would
contain 'fwrite' calls directly. Isn't that clear? A file-type object
has DIFFERENT read and write callbacks from those owned by a memory-type
object, and the object I was talking about in the previous replies is
what SETS them.
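To make this concrete, here is a minimal sketch of what I mean. The
RWops here is reduced to two fields, and the 'handle' field and the
constructor names are only my illustration, nothing that exists in
libcurl today:
- - - - - - - - - - - - -
#include <stdio.h>
#include <string.h>

/* Reduced RWops, for illustration only; the fuller layout is shown
 * further down. 'handle' is a field I invented for this sketch: it holds
 * the FILE* or the memory block the object writes into. */
typedef struct RWops RWops;
struct RWops {
    void *handle;
    int (*write)(RWops *context, void *ptr, int size, int maxnum);
};

/* File-type object: its write callback contains the fwrite() call
 * directly, so whatever buffer libcurl hands in goes straight to disk. */
static int file_write(RWops *context, void *ptr, int size, int maxnum)
{
    return (int)fwrite(ptr, (size_t)size, (size_t)maxnum,
                       (FILE *)context->handle);
}

/* Memory-type object: its write callback copies into the block held by
 * the object (growth/offset handling is omitted here; see the realloc
 * sketch at the end of this mail). */
static int mem_write(RWops *context, void *ptr, int size, int maxnum)
{
    memcpy(context->handle, ptr, (size_t)(size * maxnum));
    return maxnum;
}

/* These "constructors" are what SET the callbacks: */
RWops rwops_from_file(FILE *fp)    { RWops rw = { fp, file_write };  return rw; }
RWops rwops_from_memory(void *mem) { RWops rw = { mem, mem_write };  return rw; }
- - - - - - - - - - - - -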
>
> Your write callback seems to take a pointer to a buffer as one of its
> arguments. What buffer? Wouldn't libcurl need to store data in that
> when it receives data from a peer? How would the write callback get
> and use the data without copying it?
libcurl wouldn't care how the write callback uses the data present in
'socket_buffer_recv', which in my scheme is the *real* socket buffer
used for input by any recv-like call done by libcurl, and it is passed
to the callback.
The write callback may dump the content to a file or copy it into a
bigger memory stream (enlarging it if necessary). This may not be safe
for caching purposes, I know... however my idea was to integrate the
RWops object into libcurl to prevent this.
Perhaps the internal working of the object is not clear; take a look at
the following snippet:
- - - - - - - - - - - - -
typedef struct RWops {
    ...
    void *buffer;        /* buffer managed by the object   */
    int buffer_size;     /* current size of that buffer    */
    /* object-specific read/write callbacks (file-type, memory-type, ...) */
    int (*read)(struct RWops *context, void *ptr, int size, int maxnum);
    int (*write)(struct RWops *context, void *ptr, int size, int maxnum);
    ...
    /* makes sure 'buffer' is at least desired_buffer_size bytes */
    int (*_assert_buffer_size)(struct RWops *context,
                               int desired_buffer_size);
    ...
} RWops;
- - - - - - - - - - - -
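And this is roughly how I imagine the libcurl side would use such an
object for a download. 'transfer_chunk' and 'socket_buffer_recv' are
only names I made up for this sketch, not actual libcurl internals; it
assumes the RWops type from the snippet above:
- - - - - - - - - - - - -
#include <sys/socket.h>

/* Sketch of the libcurl side (WRITE direction, i.e. a download).
 * socket_buffer_recv is the *real* buffer handed to recv(); the very
 * same pointer is then handed to the RWops write callback, so libcurl
 * itself never copies the data again. */
static int transfer_chunk(int sockfd, RWops *rw,
                          void *socket_buffer_recv, int buffer_size)
{
    int nread = (int)recv(sockfd, socket_buffer_recv, (size_t)buffer_size, 0);
    if (nread <= 0)
        return nread;   /* connection closed, or an error */

    /* The object decides what "writing" means: fwrite() for a file-type
     * object, an in-memory copy for a memory-type object. */
    return rw->write(rw, socket_buffer_recv, nread, 1);
}
- - - - - - - - - - - - -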
>
>> I think this is really the best way to create an actual zero copy
>> interface; in case (1, WRITE) any subsequent buffers needed would be
>> handled automatically by the RWops object.
>
>
> How?
>
The write callback reallocates the memory when the next block-writing
operation would overflow the current buffer, so it works as a dynamic
stream.
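A hypothetical memory-type write callback along those lines could look
like this; 'buffer' and 'buffer_size' come from the RWops snippet above,
while the 'used' counter (bytes already written) is a field I am
assuming lives among the '...' members:
- - - - - - - - - - - - -
#include <stdlib.h>
#include <string.h>

/* Grows the object's buffer with realloc() whenever the incoming block
 * would overflow it, so the object behaves like a dynamic stream. */
static int mem_stream_write(struct RWops *context, void *ptr, int size,
                            int maxnum)
{
    int bytes = size * maxnum;
    int needed = context->used + bytes;

    if (needed > context->buffer_size) {
        int newsize = context->buffer_size ? context->buffer_size * 2 : 4096;
        while (newsize < needed)
            newsize *= 2;
        void *grown = realloc(context->buffer, (size_t)newsize);
        if (!grown)
            return 0;                  /* out of memory: nothing written */
        context->buffer = grown;
        context->buffer_size = newsize;
    }

    memcpy((char *)context->buffer + context->used, ptr, (size_t)bytes);
    context->used += bytes;
    return maxnum;                     /* number of blocks written */
}
- - - - - - - - - - - - -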
Received on 2005-12-01