
Re: debugging a crash in Curl_pgrsTime/checkPendPipeline?

From: <johansen_at_sun.com>
Date: Thu, 6 Aug 2009 15:22:52 -0700

On Mon, Aug 03, 2009 at 01:10:02PM -0700, johansen_at_sun.com wrote:
> On Mon, Aug 03, 2009 at 02:45:35PM +0200, Daniel Stenberg wrote:
> > On Wed, 29 Jul 2009, johansen_at_sun.com wrote:
> >
> >>> I'm a bit reluctant to add another list since I think we should be
> >>> able to make the existing logic work with the existing lists. I get
> >>> the feeling adding another list is just a way to escape that fixing
> >>> and instead try to engineer another way to solve the same problem.
> >>
> >> I have prototyped a fix with a 4th list that seems to work. I'm open
> >> to alternative suggestions, but I'll need details. I've been able to
> >> identify what's causing the bug, but I don't have any knowledge about
> >> how past design decisions affected the current state of the code base.
> >
> > Please just go ahead and post it here so that we can have a look and
> > discuss the actual suggestion!
>
> I would have posted this sooner, but I was still testing the fix.
>
> I'm attaching a patch for the problem. I'm also including a URL to a
> webrev. It's a multi-format web-browsable diff that we use for a lot of
> our code reviews in OpenSolaris. I find the sdiffs and wdiffs most
> useful, but everyone has their own favorite.
>
> http://cr.opensolaris.org/~johansen/curl-multibug/

I think I've found another bug in Curl_do, but I need some explanation
about why it is the way that it is.

In the patch that I sent out, I found a case where Curl_do called
Curl_connect after a connection was lost, but failed to restore the
pipelines that multi_runsingle needs to operate correctly. Once I had
that problem solved, we ran into a problem with downloaded files. In
the last case I caught, the file was supposed to be 28286 bytes, and
get_info reports that size. Unfortunately, stat(2) shows the file on
disk as 51167 bytes.
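To make the mismatch concrete, a check along these lines is roughly what
exposes it (a minimal sketch only; "easy" and "path" are placeholders,
and CURLINFO_SIZE_DOWNLOAD stands in for whatever the real get_info call
queries):

#include <curl/curl.h>
#include <sys/stat.h>
#include <stdio.h>

/* Compare the size libcurl reports for a finished transfer against
 * what stat(2) sees on disk. */
static void check_sizes(CURL *easy, const char *path)
{
  double dlsize = 0.0;            /* bytes libcurl says it downloaded */
  struct stat st;

  if(curl_easy_getinfo(easy, CURLINFO_SIZE_DOWNLOAD, &dlsize) == CURLE_OK &&
     stat(path, &st) == 0 &&
     (off_t)dlsize != st.st_size) {
    /* e.g. 28286 reported by libcurl vs. 51167 on disk */
    fprintf(stderr, "size mismatch: curl=%.0f stat=%lld\n",
            dlsize, (long long)st.st_size);
  }
}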

It looks like when the re-connect happens, the client never sets the
file pointer back to the head of the file. In fact, there doesn't appear
to be any code in libcurl that is set up to do this.

Is it possible to nuke the portion of Curl_do that attempts reconnects
entirely, or is it serving some other purpose? It looks like it's just
causing bugs for users of the pipelined multi interface.

An alternative approach would be to seek/truncate the file pointer back
to zero, but I don't understand why we even bother to reconnect in this
case. Shouldn't Curl_do just return CURLE_SEND_ERROR to the caller and
allow them to try again if they want?
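For what it's worth, the caller-side retry would look roughly like this
(only a sketch; "multi", "easy" and "fp" stand in for the application's
own state, and the truncate-then-rewind is the part libcurl can't do on
the application's behalf):

#include <curl/curl.h>
#include <stdio.h>
#include <unistd.h>

/* If a transfer ends with CURLE_SEND_ERROR, throw away whatever was
 * written before the connection died and hand the easy handle back to
 * the multi stack for another attempt. */
static void retry_after_send_error(CURLM *multi, CURL *easy, FILE *fp)
{
  fflush(fp);
  if(ftruncate(fileno(fp), 0) == 0)
    rewind(fp);                   /* seek back to the head of the file */

  curl_multi_remove_handle(multi, easy);
  curl_multi_add_handle(multi, easy);
}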

-j

-- 
Krister Johansen | Solaris Kernel Performance | krister.johansen_at_sun.com