curl-library
Re: FTP large file support patch
Date: Thu, 11 Dec 2003 17:27:00 -0800 (PST)
> > Here's the new patch.
>
> I see the same problem with it that the original patch from a few months ago
> had as well: the printf/scanf type format codes will break on systems
> with a 32-bit off_t. The fix for this last time went something like this:
>
> ...
Oops. That's right -- plus, some compilers might not be happy with the
%lld conversion for long longs either. Or, at least, I've heard rumors of
such beasts. At any rate, I've done as suggested, defining an OFF_T_FMT
based on the size of off_t (or, rather, the setting of
_FILE_OFFSET_BITS). The test that picks the correct format might need
modification, especially if there's a better way to determine the size of
off_t from what the configure script produces, or a way to detect whether
the printf/scanf length modifiers are supported, but it works for me as it
stands right now.
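To give a rough idea, it boils down to something like this (just a sketch,
not the patch verbatim; the off_t_arg typedef is only here to keep the
printf argument matched to whichever format gets picked, and isn't a name
from the patch):

#include <stdio.h>
#include <sys/types.h>

/* Pick a printf/scanf format that matches the platform's off_t, keyed
 * off _FILE_OFFSET_BITS as described above. */
#if defined(_FILE_OFFSET_BITS) && (_FILE_OFFSET_BITS > 32)
#define OFF_T_FMT "%lld"               /* 64-bit (large-file) off_t */
typedef long long off_t_arg;
#else
#define OFF_T_FMT "%ld"                /* plain 32-bit off_t */
typedef long off_t_arg;
#endif

int main(void)
{
  off_t filesize = 1234567;
  /* cast so the argument always matches whichever format was chosen */
  printf("remote file size: " OFF_T_FMT "\n", (off_t_arg)filesize);
  return 0;
}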
> I can see the value of a curl API that doesn't depend on floating point at
> all in some small systems that lack hardware floating point support.
A'ighty, point taken. I've added the PROGRESSFUNCTION_BIG key and its
related stuff. I needed to add an additional callback field to the
UserDefined struct, though, since the two callbacks have different
signatures. Right now you're allowed to set both, in which case the
non-_BIG version gets called in preference to the _BIG version. I'm not
convinced that's the right way to do things, though.
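For clarity, the two callbacks end up looking roughly like this; the _BIG
prototype and the exact CURLOPT_ spelling are my shorthand for what the
patch adds, so don't take those names as gospel:

#include <stdio.h>
#include <sys/types.h>
#include <curl/curl.h>

/* Existing progress callback: totals and counts arrive as doubles. */
static int progress_cb(void *clientp,
                       double dltotal, double dlnow,
                       double ultotal, double ulnow)
{
  (void)clientp;
  fprintf(stderr, "dl %.0f/%.0f  ul %.0f/%.0f\r",
          dlnow, dltotal, ulnow, ultotal);
  return 0;                       /* non-zero aborts the transfer */
}

/* _BIG variant from the patch: same shape, off_t arguments, so no
 * floating point is needed.  Prototype is approximate. */
static int progress_big_cb(void *clientp,
                           off_t dltotal, off_t dlnow,
                           off_t ultotal, off_t ulnow)
{
  (void)clientp; (void)dltotal; (void)dlnow; (void)ultotal; (void)ulnow;
  return 0;                       /* same convention: non-zero aborts */
}

void install_callbacks(CURL *handle)
{
  curl_easy_setopt(handle, CURLOPT_NOPROGRESS, 0L);
  curl_easy_setopt(handle, CURLOPT_PROGRESSFUNCTION, progress_cb);
  /* Only available with the patch applied; the exact option name is my
   * guess at what the PROGRESSFUNCTION_BIG key ends up as:
   * curl_easy_setopt(handle, CURLOPT_PROGRESSFUNCTION_BIG, progress_big_cb);
   */
  (void)progress_big_cb;
}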
Another caveat with this addition is that the data for the progress
monitoring is still stored as double values. I was tempted to change
those to off_t's, but then I realized that could cause trouble: if
progress is tracked globally across all the files being downloaded, for
instance, and someone downloads several files whose combined size exceeds
the 32-bit range on a platform where off_t is only 32 bits, the counters
would overflow and progress monitoring would break.
It seems like the only way to be sure of storing more or less the right
numbers is to use doubles or long longs. Maybe the function should be
changed to take long longs instead of off_t's? It seems unlikely that
anyone would attempt a single curl transfer large enough to exceed the
range of a signed long long (8+ exabytes!!)... But then, I'm not sure
that all platforms *have* a long long type (in particular, I'd worry
about the small systems without floating point support, which were one of
the stated reasons for adding this in the first place).
At any rate, the callback has been defined to take off_t's, and the
progress counts are cast back out of the internal doubles, so registering
such callbacks should work properly up to the size of an off_t, although
they may suffer some loss of precision.
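Roughly speaking, the dispatch inside the progress code comes down to
something like this (names invented for illustration; the internal
counters stay doubles, as described above):

#include <sys/types.h>

/* Invented names, just to show the preference rule and the cast; the
 * real code lives in curl's progress handling. */
struct progress_state {
  double dltotal, dlnow, ultotal, ulnow;
  int (*progress)(void *, double, double, double, double);
  int (*progress_big)(void *, off_t, off_t, off_t, off_t);
  void *clientp;
};

int call_progress(struct progress_state *p)
{
  /* the non-_BIG callback wins if both are set, as described above */
  if(p->progress)
    return p->progress(p->clientp, p->dltotal, p->dlnow,
                       p->ultotal, p->ulnow);
  if(p->progress_big)
    return p->progress_big(p->clientp,
                           (off_t)p->dltotal, (off_t)p->dlnow,
                           (off_t)p->ultotal, (off_t)p->ulnow);
  return 0;
}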
I've included the new patch. As the list doesn't really like messages
beyond 40k, I've gzipped it this time around. Should have been doing that
from the start. :)
Thanks,
Dave
- APPLICATION/x-gzip attachment: ftp_large_file-3.diff.gz