lib and tool: install large bit mask type when more than 32 protocols #9534
Conversation
Doesn't this need some precaution or checks for the case where
I already gave this some thought and ended up with the following points:
I came to the conclusion that adding several lines in configure and in header files just to generate another error message is not worth it.
For that, we should have an fd_set-like
I don't see how we are married to the bitwise operators in the tool code. We could replace that without the same limitations. It doesn't even have to be lightning fast because we don't do those checks very often.
Also, I think you're mixing two completely separate changes in a single PR here. The tool change and the library one seem unrelated. Right?
I was rather, and primarily, speaking about the library. Your mailing list answer https://curl.se/mail/lib-2022-09/0022.html seems to say you don't want a non-scalar
Yes and no! They are unrelated in terms of code independence. However, they are both needed to support >32 protocols.
See, we're mixing things here now because the change is in both areas and they are very different territories. In the libcurl code I think we can stick to only doing the new protocols if we have >32 bits, until someone actually shows up with a system without long long. I suspect they are virtually extinct these days.
Yes, I think so too. Thus there is no need for a cumbersome test on
IMO, the two quotes above contradict each other.
No, they are not. They are two separate ones:
We retain option (2) because there is now nothing in the API that specifies any particular limit of 32 (nor 64) since we went
Right, because we don't have > 32 protocols.
That day, a non-scalar type will be needed. There are then 2 options:
Changes in this PR fit option 2.
The current tool code still makes the tool assume how many protocols there are and that they will fit in a single scalar variable. It does not know this, and no API guarantees it. Even if we expand that limit (for most systems) by going to
#ifndef PROTO_TYPE_SMALL
typedef curl_off_t curl_prot_t;
#else
This adds code and a comment for the future only; I am generally against that, and I am in this case too. Most commonly, once that future arrives, some things have changed anyway, causing things to get done differently.
How would you then prepare the way for a developer to submit us a PR for a new protocol?
Adjust that commit to what is needed for the new protocol. I presume that would then replace the comment about the future with actual code for the work in progress.
Done
(Speaking about the tool) Yes, but this is at least a superset of what is supported in the lib.
Not only the universal version, but also the one that makes no assumptions about the number of protocols. I don't think there's a hurry to fix this, but I think it would be prudent and correct to do it.
I'm not really following, but yes, the supported protocols should be shown alpha-sorted for
It doesn't really make assumptions: there are of course some limitations (like a static array size or the dynbuf max length), but protocol numbers are private to the tool and only limited by the
In ... is an assumption that there are fewer than 33 or 65 protocols, depending on the size of
That's only an internal limitation due to the
-V is already OK. The importance of the order for the tests is to have a defined order of protocols listed for matches in the
The tool simply does not know how many protocols the library supports (without dynamically counting the number
Do you intend to support >64 protocols in 10 years?
Yes, but not more than the current library does. Anyway, I agree with you: it should be more scalable. An all-dynamic solution would probably be a luxury, but we should support a large number of protocols via a non-scalar proto_set_t or some other method. A limit would however exist if not all-dynamic. Just a personal question: why are you so preoccupied with the future of the tool, yet opposed to advance preparation of the lib at the same time? This appears antagonistic to me, although the two are distinct.
No, I use that to illustrate how we should make the tool only use the knowledge it has, not assume things.
The library doesn't assume it knows.
I think we should use the API and not guess what goes beyond it: future, historical and current.
Move the curl_prot_t to its own conditional block. Introduce symbol PROTO_TYPE_SMALL to control it. Fix a cast in a curl_prot_t assignment. Remove an outdated comment. Follow-up to cd5ca80.
#define PROTO_TYPE_SMALL

#ifndef PROTO_TYPE_SMALL
typedef curl_off_t curl_prot_t;
Wouldn't this check be easier and make more sense if reversed?
Like with #ifdef PROTO_TYPE_BIG
...
So that we have a define we set when we cross the 32 boundary and need to use a larger type? Especially since that new protocol will require a 64 bit data type, so going beyond 32 needs to be conditional (at least until we require a 64 bit type).
I don't think so. I started by implementing it as PROTO_TYPE_LARGE, but I noted that it is easier to undefine a symbol than to redefine it, so I reversed the logic. `#undef` can be applied to an undefined symbol: it is just ignored. `#define` may redefine an existing symbol only if its value does not change, but that is implementation-dependent, so we had better assume it may not work. When several new protocols (!) are introduced, they will all need to act on this, so it will be easier to simply `#undef` the symbol than to redefine it for each protocol.
I know this is the future, but we have to prepare for it :-)
Ok, fair enough. We can always change it later if we think of improvements down the line.
Thanks!
Thanks for pulling!
In the library, complete support for more than 32 protocols:
A large protocol bit mask type is also introduced in the CLI tool. As there is no way to determine at compile time whether the library supports high protocol bits, this is unconditional.
Follow-up to cd5ca80.