curl-users

RE: curl 7.9.7 pre-release 1

From: Daniel Stenberg <daniel_at_haxx.se>
Date: Tue, 7 May 2002 08:39:48 +0200 (MET DST)

On Mon, 6 May 2002, Roth, Kevin P. wrote:

> Thanks for the new features. I especially like the new --trace option!

Thanks for testing them out. I clearly failed to do so myself! I should add a
couple of test cases for this purpose...

> I found that the --trace option doesn't seem to work properly in
> combination with -o, -O and -v. Perhaps others too, but these were the
> issues I stumbled on. My test URL (for what it's worth) is
> "http://mweb-test:8099/AdBlocker.js".

Actually, anything that made curl write to stderr crashed this way. Pretty
silly: it was a forgotten 'break' in a libcurl switch() statement. That was
the -o and -O problem.
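To illustrate the class of bug (a hypothetical sketch with invented names,
not libcurl's actual code): with the 'break' missing, one case of a switch()
falls through into the next, so output meant for stderr gets routed to the
wrong destination.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of the bug class; the names are invented.
 * When 'fixed' is 0 the forgotten 'break' is simulated: the INFO_TEXT
 * case falls through into the next case, and text meant for stderr
 * ends up misrouted to the trace file. */
enum info_type { INFO_TEXT, INFO_HEADER_OUT };

static const char *route(enum info_type type, int fixed)
{
    const char *dest = "unknown";
    switch(type) {
    case INFO_TEXT:
        dest = "stderr";
        if(fixed)
            break;          /* the line that was forgotten */
        /* fall through when buggy */
    case INFO_HEADER_OUT:
        dest = "tracefile";
        break;
    }
    return dest;
}
```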

The -v problem was due to some minor silliness. -v doesn't work together
with --trace, since --trace replaces the -v functionality. Therefore, an
extra -v disabled parts of the --trace output. Not anymore, though.

I'll make a pre-release 2 available as soon as I've done some more
adjustments.

> $ curl --trace - URL
>
> I would have guessed that the trace output in this command would go
> to stdout, instead it goes to a file named "-" (a bit hard to `cat`
> or `vi`...). I realize this is probably intentional behavior, but
> thought I'd pass the comment along in case you overlooked something.

Good point. I'll make it go to stdout when - is given, in the same style
many other options already use. I just didn't think of it.
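The usual idiom for that convention looks something like this minimal
sketch (hypothetical helper, not the actual curl source):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch of the common CLI convention: a file name of "-"
 * selects stdout instead of opening a file on disk. The caller must
 * then take care not to fclose() stdout. */
static FILE *open_trace_file(const char *name)
{
    if(strcmp(name, "-") == 0)
        return stdout;
    return fopen(name, "wb");
}
```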

> A comment on the format of --trace: any chance you could add the ability to
> "turn off" the hex character output in the trace file? I like the rest of
> the format, but find that 16 characters per line can be hard to read. I
> would love to see an option to place 64 characters across, with no hex
> translation, as in:
>
> == Info: About to connect() to mweb-test:8099
> == Info: Connected to mweb-test (89.2.42.216) port 8099
> => Send header 219 (0xdb) bytes
> 0000: HEAD /AdBlocker.js HTTP/1.1..User-Agent: curl/7.9.7-pre1 (i686-p
> 0040: c-cygwin) libcurl 7.9.7-pre1 (OpenSSL 0.9.6c)..Host: mweb-test:8
> ...

I added the hex codes there since the data may very well be binary, and
your example above actually shows the problem: the CR and LF characters are
replaced with dots, making it impossible to tell exactly what was sent.

But of course, I agree with you that this is a friendlier output that is
easier to read, especially when dealing with ASCII or at least very little
binary data.

What about keeping --trace as it is now and adding a --trace-ascii that
works as you suggest? Or possibly the reverse: a --trace-binary that works
like I made it, with --trace doing what you suggest?
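The ambiguity is easy to see in a sketch of the ASCII column (a
hypothetical helper, not libcurl's dump code): every non-printable byte,
CR and LF included, collapses into the same dot.

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: render up to 'width' bytes as the ASCII column
 * of a trace line. Printable bytes are copied as-is; everything else,
 * CR and LF included, becomes '.', so different control bytes are
 * indistinguishable in the output. */
static size_t ascii_column(const unsigned char *data, size_t len,
                           size_t width, char *out)
{
    size_t n = len < width ? len : width;
    size_t i;
    for(i = 0; i < n; i++)
        out[i] = isprint(data[i]) ? (char)data[i] : '.';
    out[n] = '\0';
    return n;
}
```

With hex columns alongside, the reader could still see that the dots were
0x0d 0x0a; in pure ASCII mode that information is lost.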

> Also - since you did the nice job of separating the received headers into
> individual chunks, shouldn't the sent headers also be separated? Or else,
> should none of them be separated?

This question is more complicated than it looks at first glance.

I actually didn't want to split up the received data into individual
headers; I would have preferred to pass all headers in one chunk. The
reason I had to do it this way is that libcurl parses the headers one line
at a time and has no idea where the headers end until it has parsed the
last one. When libcurl receives a large chunk, it may start with N headers
followed by Z bytes of body. This forced me to deliver the incoming headers
to the DEBUGFUNCTION this way.
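As a rough sketch of that constraint (hypothetical code, not libcurl's
parser): a single received chunk can hold several complete header lines
plus the start of the body, and only reaching the blank line reveals where
the headers stop, so each line has to be handled individually.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: scan a received chunk line by line, counting
 * header lines until the blank line that ends the header block, and
 * record where the body starts. The boundary can only be found by
 * parsing every line in turn. */
static int count_headers(const char *chunk, size_t len, size_t *body_off)
{
    int count = 0;
    size_t i = 0;
    while(i < len) {
        const char *eol = memchr(chunk + i, '\n', len - i);
        size_t linelen;
        if(!eol)
            break;              /* incomplete line; wait for more data */
        linelen = (size_t)(eol - (chunk + i));
        if(linelen == 0 || (linelen == 1 && chunk[i] == '\r')) {
            *body_off = (size_t)(eol - chunk) + 1;  /* body starts here */
            return count;
        }
        count++;
        i = (size_t)(eol - chunk) + 1;
    }
    *body_off = len;    /* no blank line yet: everything so far is headers */
    return count;
}
```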

On the other hand, when we send headers in an HTTP request, there is no
functionality within libcurl that separates the whole request chunk into
individual headers, so the DEBUGFUNCTION instead gets the same chunk of
data that was successfully sent to the peer (which indeed does not have to
be all headers, or even complete headers).

> I tested the fix for dumping multiple headers (-D) on one call to curl.exe.
> It now works as expected.

Goodie.

-- 
    Daniel Stenberg -- curl groks URLs -- http://curl.haxx.se/
Received on 2002-05-07