[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <be2fa62f-f4d3-4b1c-984d-698088908ff3@sirena.org.uk>
Date: Thu, 11 Jan 2024 15:35:40 +0000
From: Mark Brown <broonie@...nel.org>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: Kees Cook <keescook@...omium.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-bcachefs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org,
Nikolai Kondrashov <spbnick@...il.com>
Subject: Re: [GIT PULL] bcachefs updates for 6.8
On Wed, Jan 10, 2024 at 07:58:20PM -0500, Kent Overstreet wrote:
> On Wed, Jan 10, 2024 at 04:39:22PM -0800, Kees Cook wrote:
> > With no central CI, the best we've got is everyone running the same
> > "minimum set" of checks. I'm most familiar with netdev's CI which has
> > such things (and checkpatch.pl is included). For example see:
> > https://patchwork.kernel.org/project/netdevbpf/patch/20240110110451.5473-3-ptikhomirov@virtuozzo.com/
> Yeah, we badly need a central/common CI. I've been making noises that my
> own thing could be a good basis for that - e.g. it shouldn't be much
> work to use it for running our tests in tools/tesing/selftests. Sadly no
> time for that myself, but happy to talk about it if someone does start
> leading/coordinating that effort.
IME the actually running the tests bit isn't usually *so* much the
issue, someone making a new test runner and/or output format does mean a
bit of work integrating it into infrastructure but that's more usually
annoying than a blocker. Issues tend to be more around arranging to
drive the relevant test systems, figuring out which tests to run where
(including things like figuring out capacity on test devices, or how
long you're prepared to wait in interactive usage) and getting the
environment on the target devices into a state where the tests can run.
Plus any stability issues with the tests themselves of course, and
there's a bunch of costs somewhere along the line.
I suspect we're more likely to get traction with aggregating test
results and trying to do UI/reporting on top of that than with the
running things bit, that really would be very good to have. I've copied
in Nikolai who's work on kcidb is the main thing I'm aware of there,
though at the minute operational issues mean it's a bit write only.
> example tests, example output:
> https://evilpiepirate.org/git/ktest.git/tree/tests/bcachefs/single_device.ktest
> https://evilpiepirate.org/~testdashboard/ci?branch=bcachefs-testing
For example looking at the sample test there it looks like it needs
among other things mkfs.btrfs, bcachefs, stress-ng, xfs_io, fio, mdadm,
rsync and a reasonably performant disk with 40G of space available.
None of that is especially unreasonable for a filesystems test but it's
all things that we need to get onto the system where we want to run the
test and there's a lot of systems where the storage requirements would
be unsustainable for one reason or another. It also appears to take
about 33000s to run on whatever system you use which is distinctly
non-trivial.
I certainly couldn't run it readily in my lab.
> > At the very least, checkpatch.pl is the common denominator:
> > https://docs.kernel.org/process/submitting-patches.html#style-check-your-changes
> At one point in my career I was religious about checkpatch; since then
> the warnings it produces have seemed to me more on the naggy and less
> on the useful end of the spectrum - I like smatch better in that
> respect. But - I'll start running it again for the deprecation
> warnings :)
Yeah, I don't run it on incoming stuff because the rate at which it
reports things I don't find useful is far too high.
Download attachment "signature.asc" of type "application/pgp-signature" (489 bytes)
Powered by blists - more mailing lists