Date:   Tue, 10 Aug 2021 19:08:45 +0200
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     regressions@...ts.linux.dev,
        Thorsten Leemhuis <linux@...mhuis.info>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Guillaume Tucker <guillaume.tucker@...labora.com>,
        automated-testing@...toproject.org,
        Sasha Levin <sashalevin@...gle.com>,
        Marco Elver <elver@...gle.com>,
        syzkaller <syzkaller@...glegroups.com>,
        Mara Mihali <mihalimara22@...il.com>
Subject: finding regressions with syzkaller

Hi,

I want to give an overview of an idea and an early prototype we
developed as part of an intern project. This is not yet at the stage
of producing real results, but I just wanted to share the idea with
you and maybe get some feedback.

The idea is to generate random test programs (as syzkaller does) and
then execute them on two different kernels and compare the results
(so-called "differential fuzzing"). This has the potential to find not
just various "crashes" but also logical bugs and regressions.
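
To make this concrete, here is a minimal illustrative sketch
(hypothetical code, not the actual syz-verifier implementation) of
what "results" get compared; the real tool generates the syscall
sequence randomly:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int results[3];

	int fd = open("/tmp/probe", O_CREAT | O_RDWR, 0600);
	results[0] = fd < 0 ? errno : 0;
	results[1] = write(fd, "x", 1) < 0 ? errno : 0;
	results[2] = close(fd) < 0 ? errno : 0;

	/* Run the same binary on two kernels and diff this output. */
	for (int i = 0; i < 3; i++)
		printf("call %d: errno=%d\n", i, results[i]);
	return 0;
}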

Initially we thought of comparing Linux with gVisor or FreeBSD on a
common subset of syscalls. But it turns out we can also compare
different versions of Linux (LTS vs upstream, different LTS versions,
or LTS .1 with .y) to find any changes in behavior/regressions.
Ultimately such an approach could automatically detect and report a
broad spectrum of small and large changes across subsystems, and
potentially even bisect the commit that introduced the difference.

In the initial version we only considered the returned errnos
(including 0/success) as the "results" of executing a program. But
theoretically that should be enough to sense lots of differences:
e.g. if a file's state differs, it can be sensed via a subsequent
read returning different results.
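
For example (an illustrative sketch of a hypothetical scenario): if
some earlier syscall left a file in a different state on the two
kernels, a later read() already returns a different byte count or
different data, so the errno/return-value comparison picks up the
state difference indirectly:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[16];
	int fd = open("/tmp/state", O_CREAT | O_RDWR | O_TRUNC, 0600);

	/* Imagine a syscall whose effect differs between the kernels... */
	write(fd, "abc", 3);
	lseek(fd, 0, SEEK_SET);

	/* ...then the return value of this read differs as well. */
	ssize_t n = read(fd, buf, sizeof(buf));
	printf("read returned %zd (errno=%d)\n", n, n < 0 ? errno : 0);
	return 0;
}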

The major issue is false-positive differences caused by timings,
non-determinism, accumulated state, intentional and semi-intentional
changes (e.g. subtle API extensions), etc. We learnt how to deal with
some of these to some degree, but feasibility is still an open
question.
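
One mitigation (a sketch of the general approach, not the exact
syz-verifier logic) is to rerun each program several times on both
kernels and flag a difference only when the observed result sets do
not overlap at all, which filters out most flaky, timing-dependent
outcomes:

#include <stdbool.h>

#define RERUNS 4

/* results_a/results_b hold the errnos from RERUNS executions of the
 * same program on kernel A and kernel B respectively. */
static bool is_real_difference(const int *results_a, const int *results_b)
{
	for (int i = 0; i < RERUNS; i++)
		for (int j = 0; j < RERUNS; j++)
			if (results_a[i] == results_b[j])
				return false; /* any overlap => likely flaky */
	return true;
}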

So far we have been able to find a few real-ish differences; the most
interesting one, I think, is this commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d25e3a3de0d6fb2f660dbc7d643b2c632beb1743
which silently does s/EBADF/ENXIO/:

- f = fdget(p->wq_fd);
- if (!f.file)
-     return -EBADF;
+ f = fdget(p->wq_fd);
+ if (!f.file)
+     return -ENXIO;

I don't know how important this difference is, but I think it's
exciting and promising that the tool was able to sense this change.
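
For reference, a difference like this can be reproduced with a probe
along these lines (my illustration, assuming headers that provide
IORING_SETUP_ATTACH_WQ and __NR_io_uring_setup; kernels predating
IORING_SETUP_ATTACH_WQ reject the flag with EINVAL instead):

#include <errno.h>
#include <linux/io_uring.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct io_uring_params p;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_ATTACH_WQ;
	p.wq_fd = (__u32)-1; /* deliberately bogus fd */

	/* Old kernels report EBADF here, patched ones ENXIO. */
	if (syscall(__NR_io_uring_setup, 4, &p) < 0)
		printf("io_uring_setup: errno=%d (%s)\n",
		       errno, strerror(errno));
	return 0;
}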

The other difference we discovered is caused by this commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=97ba62b278674293762c3d91f724f1bb922f04e0

which adds attr->sigtrap:
+ if (attr->sigtrap && !attr->remove_on_exec)
+     return -EINVAL;

So the new kernel returns EINVAL for some inputs, while the old kernel
did not recognize this flag and returned E2BIG. This is an example of
a subtle API extension, which is a problem for the tool (bolder API
changes, like a new syscall or a new /dev node, are easier to handle
automatically).
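
Such a difference can be probed with something like this (again my
illustration; it assumes kernel headers new enough to declare the
attr.sigtrap and attr.remove_on_exec bits):

#include <errno.h>
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_DUMMY;
	attr.sigtrap = 1;        /* the new bit */
	attr.remove_on_exec = 0; /* invalid combination on new kernels */

	/* New kernels return EINVAL; in our runs the old kernel did not
	 * recognize the flag and returned E2BIG. */
	if (syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0) < 0)
		printf("perf_event_open: errno=%d (%s)\n",
		       errno, strerror(errno));
	return 0;
}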

If you are interested in more info, here are some links:
https://github.com/google/syzkaller/blob/master/docs/syz_verifier.md
https://github.com/google/syzkaller/issues/692
https://github.com/google/syzkaller/issues/200

Since this work is at a very early stage, I only have very high-level questions:
 - what do you think about feasibility/usefulness of this idea in general?
 - any suggestions on how to make the tool find more differences/bugs
or how to make it more reliable?
 - is there a list of (or pointers to) known past regressions that
would be useful to find with such a tool? (I've looked at the things
reported on the regressions@ list, but those are mostly crashes/boot
failures, which syzkaller already finds well)
 - anybody else we should CC?

Thanks
