Message-ID: <067b8eea-3c77-c1f0-8e68-b99e6bf0c033@leemhuis.info>
Date: Wed, 11 Aug 2021 13:25:54 +0200
From: Thorsten Leemhuis <linux@...mhuis.info>
To: Dmitry Vyukov <dvyukov@...gle.com>, regressions@...ts.linux.dev
Cc: LKML <linux-kernel@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Guillaume Tucker <guillaume.tucker@...labora.com>,
automated-testing@...toproject.org,
Sasha Levin <sashalevin@...gle.com>,
Marco Elver <elver@...gle.com>,
syzkaller <syzkaller@...glegroups.com>,
Mara Mihali <mihalimara22@...il.com>,
Lukas Bulwahn <lukas.bulwahn@...il.com>
Subject: Re: finding regressions with syzkaller
[CCing Lukas]
Hi Dmitry!
On 10.08.21 19:08, Dmitry Vyukov wrote:
> [...]
> The idea is to generate random test programs (as syzkaller does) and
> then execute them on 2 different kernels and compare results (so
> called "differential fuzzing"). This has the potential of finding not
> just various "crashes" but also logical bugs and regressions.
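The loop Dmitry describes could be sketched roughly as follows. This is a toy model, not syzkaller's actual implementation: the "kernels" are stand-in Python functions and the "programs" are lists of (op, arg) pairs instead of real syscall sequences, purely to illustrate the compare-two-kernels idea.

```python
import random

def gen_program(rng, length=5):
    """Generate a random 'program': a list of (op, arg) pairs (toy stand-in for syscalls)."""
    ops = ["open", "read", "write", "close"]
    return [(rng.choice(ops), rng.randint(0, 3)) for _ in range(length)]

def run(program, kernel):
    """Execute a program against a kernel model; collect per-operation results."""
    return [kernel(op, arg) for op, arg in program]

# Two toy 'kernel' models; kernel_b changes the result of write(3),
# standing in for a behavioral regression between two kernel versions.
def kernel_a(op, arg):
    return (op, arg, 0)

def kernel_b(op, arg):
    if op == "write" and arg == 3:
        return (op, arg, -22)  # e.g. now fails with -EINVAL
    return (op, arg, 0)

def differential_fuzz(iterations=100, seed=1):
    """Run the same random programs on both kernels and record result divergences."""
    rng = random.Random(seed)
    divergences = []
    for _ in range(iterations):
        prog = gen_program(rng)
        res_a, res_b = run(prog, kernel_a), run(prog, kernel_b)
        if res_a != res_b:
            divergences.append((prog, res_a, res_b))
    return divergences

divs = differential_fuzz()
print(f"{len(divs)} diverging programs out of 100")
```

In the real setting the hard part is exactly what the mail says next: separating divergences like the injected one above from false positives caused by timing, accumulated state, and intentional ABI changes.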
Hmmm, interesting concept!
> The major issue is various false positive differences caused by
> timings, non-determinism, accumulated state, intentional and
> semi-intentional changes (e.g. subtle API extensions), etc. We learnt
> how to deal with some of these to some degree, but feasibility is
> still an open question.
Sounds complicated and like a lot of manual work.
Do you have in mind that Linus and hence many other kernel developers
afaics only care about regressions someone actually observed in
practice? Like a piece of software or a script breaking due to a kernel-side change?
To quote Linus from
https://lore.kernel.org/lkml/CA+55aFx3RswnjmCErk8QhCo0KrCvxZnuES3WALBR1NkPbUZ8qw@mail.gmail.com/
```The Linux "no regressions" rule is not about some theoretical
"the ABI changed". It's about actual observed regressions.
So if we can improve the ABI without any user program or workflow
breaking, that's fine.```
His stance on that afaik has not changed since then.
Thus after ruling out all false positives syzkaller might find, there
will always be the follow-up question "well, does anything/anyone
actually care?". That might be hard to answer and requires yet more
manual work by some human. Maybe those working hours, at least for now,
are better spent in other areas.
> Since this work is in very early stage, I only have very high-level questions:
> - what do you think about feasibility/usefulness of this idea in general?
TBH I'm a bit sceptical due to the above factors. Don't get me wrong,
making syzkaller look out for regressions sounds great, but I wonder if
there are more pressing issues that are worth tackling first.
Another aspect: CI testing already finds quite a few regressions, but
those that are harder to catch are afaics often in driver code. And you
often can't test that without the hardware, which makes me assume that
syzkaller wouldn't help here (or am I wrong?).
> - any suggestions on how to make the tool find more differences/bugs
> or how to make it more reliable?
> - is there a list or pointers to some known past regressions that
> would be useful to find with such tool? (I've looked at the things
> reported on the regressions@ list, but it's mostly crashes/not
> booting, but that's what syzkaller can find already well)
I first wanted to tell you "look up the reports I compiled in 2017 in
the LKML archives", but I guess the way better solution is: just grep
for "regression" in the commit log.
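That grep can be done directly with git; something like the following (the version range is just an example) lists commits in a kernel checkout whose log message mentions the word:

```shell
# List commits between two tags whose message mentions "regression"
# (-i makes the --grep pattern match case-insensitively):
git log --oneline -i --grep=regression v5.13..v5.14
```

The hits would still need manual triage, since "regression" in a commit message doesn't always mean a user-visible one.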
> - anybody else we should CC?
I guess the people from the Elisa project might be interested in this,
that's why I CCed Lukas.
Ciao, Thorsten