Re: A CI job inventory
From: Timothe Litt <litt_at_acm.org>
Date: Mon, 7 Feb 2022 18:56:14 -0500
On 07-Feb-22 18:29, Daniel Stenberg wrote:
> On Mon, 7 Feb 2022, Timothe Litt via curl-library wrote:
>
>> I see backends, but where are the protocols? E.g. does a job test
>> http(s)? FTP? etc.
>
> It seems sensible to generate a list of the supported protocols each CI
> job builds, yes. To help generate a better "coverage map".
>
>> Consider how to measure effectiveness - 100 seems like a lot.
>> Coverage? How many bugs found? Recently?
>
> That would be interesting, but we really have no way of measuring that.
>
Maybe not precisely, but an approximation is possible. See my response
to Dan's note. A real database can be kept. A CI job's exit status can
be captured, timestamped, and recorded. PRs can be tagged with the CI
job that found an issue. Then you can compute bugs found per job - and a
time series. That's a reasonable proxy for effectiveness.
Tags can also let you categorize bug types - e.g. memory leaks, buffer
overflows, protocol errors...
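To make that concrete, here is a rough sketch of what such bookkeeping could look like. The record layout, job names, and bug tags below are all hypothetical, not part of any existing curl tooling:

```python
# Hypothetical sketch: aggregate tagged CI results into per-job and
# per-category bug counts. Job names and tags are illustrative only.
from collections import Counter

# Each record: (job name, exit status, ISO timestamp, bug tag or None)
records = [
    ("linux-openssl",   1, "2022-01-10T04:00:00", "memory-leak"),
    ("linux-openssl",   0, "2022-01-17T04:00:00", None),
    ("win-schannel",    1, "2022-01-12T04:00:00", "protocol-error"),
    ("macos-sectransp", 0, "2022-01-12T04:00:00", None),
]

def bugs_per_job(records):
    """Count records where a job failed and a bug tag was attached."""
    return Counter(job for job, status, _, tag in records
                   if status != 0 and tag is not None)

def bugs_by_category(records):
    """Categorize found bugs by their tag (memory leak, overflow, ...)."""
    return Counter(tag for _, status, _, tag in records
                   if status != 0 and tag is not None)

print(bugs_per_job(records))      # which jobs actually find bugs
print(bugs_by_category(records))  # what kinds of bugs they find
```

Keep the timestamps around and the same counts become a time series.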
>> May be worth running most effective first, or tagging PRs with area
>> affected so CI can test that first. Or running less effective jobs
>> less frequently...
>
> The CI jobs do their best work on PRs, and for those there is no
> "less frequently" than once for each push since the CI jobs should
> then help detect problems in that particular push.
>
>
You can manually trigger once a week, or before a release, every 10th
PR, or... You can also script "manual" triggers.
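A scripted trigger policy could be as simple as a predicate like this. The function name, parameters, and thresholds are made up for illustration:

```python
# Hypothetical policy sketch for running a low-yield CI job less often:
# every Nth PR, or when it hasn't run for a while. A scheduled wrapper
# (cron, or a scripted manual dispatch) would call should_run() and
# skip the job when it returns False.
from datetime import date, timedelta

def should_run(pr_number, last_run, today, every_nth=10, max_age_days=7):
    """Run the job on every Nth PR, or if it hasn't run in a week."""
    if pr_number % every_nth == 0:
        return True
    return (today - last_run) >= timedelta(days=max_age_days)
```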
The point is that if a job provides coverage, but rarely finds problems,
running it frequently wastes resources - not just compute (an
environmental cost) but the time you spend waiting for the effective jobs
to run.
If the protocol or platform that you fixed is the 199th job in the
queue, you're waiting for feedback. That's expensive too. Even if you
do something else, you lose task switching time. More intelligent
scheduling helps.
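A smarter scheduler could be sketched like this: run jobs that cover the changed area first, then order by historical effectiveness. The area tags and bug counts are invented for the example:

```python
# Hypothetical scheduling sketch: run the jobs most likely to find a
# problem first. The "areas" tags and effectiveness scores are made up.
def order_jobs(jobs, bug_counts, changed_area=None):
    """Sort jobs so area-relevant and historically effective ones lead.

    jobs: {name: set of area tags}, bug_counts: {name: bugs found}.
    """
    def priority(name):
        relevant = changed_area in jobs[name]
        # Relevant jobs first, then most bugs found, then name as tiebreak.
        return (not relevant, -bug_counts.get(name, 0), name)
    return sorted(jobs, key=priority)

jobs = {"ftp-tests": {"ftp"}, "http-tests": {"http"}, "fuzzing": {"http", "ftp"}}
bug_counts = {"fuzzing": 12, "http-tests": 3, "ftp-tests": 1}
print(order_jobs(jobs, bug_counts, changed_area="http"))
```

With that ordering, a fix to an HTTP code path gets feedback from HTTP-relevant jobs first instead of waiting behind a long queue.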
> Timothe Litt
> ACM Distinguished Engineer
> --------------------------
> This communication may not represent the ACM or my employer's views,
> if any, on the matters discussed.
Received on 2022-02-08