cURL / Mailing Lists / curl-library / Single Mail


Re: [PATCH] sockfilt: read full blocks also in juggle()

From: Kamil Dudka <>
Date: Thu, 29 Dec 2011 14:37:37 +0100

On Thursday 29 December 2011 13:44:01 Yang Tse wrote:
> 2011/12/29 Kamil Dudka wrote:
> >> Please verify if this commit alone improves/fixes
> >>
> >
> > No, it does not. FTPS tests are still skipped in the same way as before.
> Ok. When you say FTPS tests are skipped, I'm assuming you mean exactly
> that, and that these are skipped due to a failure in the FTPS test
> server verification stage.
> >> If e99128a5c9 does not improve the situation, logs from a manual run
> >> of " -k 100" and interrupting it should better pinpoint
> >
> > What should be the connection between test100 and FTPS?
> None ;-) That 100 should have been written as 400.

Here are the logs without any of the patches being applied:

> >> where the problem comes from. I believe that making sockfilt.c break
> >> on new-line boundaries should not be required unless we have another
> >> problem somewhere else.
> >
> > How else would we know we have read enough from the socket?
> sockfilt has never known how much to read from the socket. It calls
> recv() (the sread macro) with a big enough buffer, getting whatever
> amount of data recv() delivers.
> sockfilt's operation should be fully independent of application data;
> as such, it should be agnostic to any data it is handling.
> There should be no need to convert it into something that handles data
> based on new-lines or line-oriented chunks; remember that it also
> handles binary data.
> Once sockfilt is running and connected, its operation is actually
> quite simple: it checks whether there is something to read on stdin
> and, if so, reads it and writes it to the socket. It likewise checks
> whether there is something to read from the socket and, if so, reads
> it and writes it to stdout.
> juggle(), called in a loop, does the above forever until a fatal error
> happens (the stdin or stdout open2() pipes are broken, or the socket
> is disconnected from the peer), and then sockfilt terminates.
> If for any reason it is not operating as described above then there's
> a bug in sockfilt.
> In its existing implementation it does stdin reads before socket
> reads. Maybe we should be doing socket reads before stdin reads when
> both are possible. Given that this is a single-threaded process, one
> of them has to be done before the other.

Then it depends on the size of the chunks it gets, which is a fundamentally
broken assumption. My patch fixes this for NL-based protocols, although it
does not fix the design itself.

> > Please explain what the issue with my original patch actually is.
> It introduces to sockfilt the concept that it is handling NL-terminated
> chunks of data instead of arbitrary data.
> It changes the function used to read from the socket from recv() (the
> sread macro) to read(), which introduces portability issues.

I spotted the mixing of read/sread in sockfilt.c and considered it another
bug. Are you implying it was intentional?

> Somehow it works around whatever issue we have, but I'm not confident
> at all that it fixes the real issue, and as such any other apparently
> unrelated modification may uncover the problem again.
> Can you point out which library's (or libraries') source code I should
> download in order to build and be capable of repeating the issue?

This should be easy to repeat with any nss-3.13 unless you have the
NSS_SSL_CBC_RANDOM_IV environment variable set to zero, which suppresses
the extraordinary chunking on the client's side.

You can get the sources of nss here (but most Linux distros have it
already packaged, so compiling it locally should not be necessary):

Received on 2011-12-29