Message-ID: <8465de6a-3eee-492e-8d82-d1ea3a3c4c05@kernel.org>
Date: Tue, 22 Oct 2024 11:11:29 +0200
From: Matthieu Baerts <matttbe@...nel.org>
To: Sasha Levin <sashal@...nel.org>
Cc: ksummit@...ts.linux.dev, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org
Subject: Re: linus-next: improving functional testing for to-be-merged pull
requests
Hi Sasha,
Thank you for your replies!
On 21/10/2024 19:36, Sasha Levin wrote:
> On Mon, Oct 21, 2024 at 07:18:38PM +0200, Matthieu Baerts wrote:
>> On 21/10/2024 18:07, Sasha Levin wrote:
(...)
>>> 4. Continuous tree (not daily tags like in linux-next),
>>> facilitating easier bisection
>>
>> What will happen when a pull request is rejected?
>
> My mental playbook is:
>
> 1. If a pull request is just ignored, ping it in case it was forgotten.
> 2. If we have an explicit NACK, just revert the merge commit.
Hopefully these reverts will stay exceptional, because they can quickly
become hard to manage!
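For reference, reverting a merge commit requires picking the mainline
parent with -m; here is a minimal throwaway-repo sketch (the repo layout
is invented purely for illustration):

```shell
# Throwaway-repo demo of reverting a merge commit. The -m 1 flag keeps
# the first parent (the tree's own history) and undoes everything the
# pull brought in.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m base
git checkout -q -b feature
echo change > file && git add file
git -c user.email=a@b -c user.name=t commit -q -m feature
git checkout -q main
git -c user.email=a@b -c user.name=t merge -q --no-ff --no-edit feature
# HEAD is now the merge commit; revert it, keeping mainline (parent 1).
git -c user.email=a@b -c user.name=t revert --no-edit -m 1 HEAD
test ! -e file && echo "merge undone"
```

Note that without -m, git refuses to revert a merge commit at all, since
it cannot know which parent's state to return to.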
>> (...)
>>
>>> We also want to avoid altering the existing workflow. In particular:
>>>
>>> 1. No increase in latency. If anything, the expectation is that
>>> the cadence of merges would be improved given that Linus will
>>> need to do fewer builds and tests.
>>>
>>> 2. Require "sign up" for the tree like linux-next does. Instead,
>>> pull requests are monitored and grabbed directly from the
>>> mailing list.
>>
>> Out of curiosity: is it done automatically? Will it email someone when a
>> conflict is found?
>
> So it's 80% automatic now: my scripts monitor emails using lei, parse
> the relevant ones to extract the pull instructions, and then most of
> those pull requests just merge cleanly.
>
> There are some with conflicts, but since Linus insists on having an
> explanation for merge conflicts, those pull requests contain those
> instructions within them. In those cases I manually followed the
> instructions to resolve the conflicts (which were trivial so far).
>
> I'll likely send a mail out *only* if I see a non-trivial merge conflict
> without an explanation in the body.
OK, thank you!
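For anyone curious, the extraction step could be sketched roughly like
this (the sample mail body and the regex are assumptions for
illustration, not the actual scripts; the real flow first fetches the
mails with `lei q`):

```shell
# Hypothetical sketch of extracting "please pull" instructions from a
# [GIT PULL] mail body. The sample mail and the regex are invented for
# illustration; the real scripts fetch the mail with lei first.
mail='Please pull from

  git://git.kernel.org/pub/scm/linux/kernel/git/example/net.git tags/net-6.12-rc5

to receive the following changes.'

printf '%s\n' "$mail" |
grep -Eo '(git|https)://[^ ]+ +tags/[^ ]+' |
while read -r url ref; do
    # In the real workflow this would be: git pull --no-edit "$url" "$ref"
    echo "would pull: $url $ref"
done
```

A clean merge then needs no human input at all; only the conflicting
cases fall back to the manual resolution described above.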
>> (...)
>>
>>> Current testing:
>>> - LKFT: https://qa-reports.linaro.org/lkft/sashal-linus-next/
>>> - KernelCI: https://t.ly/KEW7F
>>
>> That's great to have more tests being executed! Who is going to monitor
>> the results? This task can quickly take time if this person also has to
>> check for false positives and flaky tests.
>>
>> Are the maintainers supposed to regularly monitor the results for the
>> tests they are responsible for? Or will they be (automatically?) emailed
>> when there is a regression?
>
> I'm not sure about this part. While I look at it and will likely send
> a mail out if I see something fishy, the only change in workflow that I
> hope will happen here is Linus looking at a dashboard or two before he
> begins his daily merge session.
OK, thank you! I find these dashboards not so easy to read: there are
many tests, and it is not always clear what they do or how important
they are. Yes, it is possible to dig into the history to check whether a
test is known to be unstable, but there is no indicator showing that
directly, nor a global one saying "OK to pull".
What I mean is that I hope these dashboards will actually help, and not
just be there so we can say "look, we are running tests" while nobody
actually looks at the results :)
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.