Date:   Thu, 19 Dec 2019 11:42:39 +0100
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        David Miller <davem@...emloft.net>,
        Greg KH <gregkh@...uxfoundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        "open list:HARDWARE RANDOM NUMBER GENERATOR CORE" 
        <linux-crypto@...r.kernel.org>
Subject: Re: [PATCH net-next v2] net: WireGuard secure network tunnel

On Thu, Dec 19, 2019 at 11:07 AM Jason A. Donenfeld <Jason@...c4.com> wrote:
>
> On Thu, Dec 19, 2019 at 10:35 AM Dmitry Vyukov <dvyukov@...gle.com> wrote:
> > > Is this precise enough for race
> > > condition bugs?
> >
> > It's finding lots of race-condition-provoked bugs (I would say
> > they're the most common cause of kernel bugs).
>
> I meant -- are the reproducers it makes precise enough to retrigger
> network-level race conditions?

We provide a simple invariant: if syzbot claims a reproducer, it was
able to trigger the exact reported crash using that exact program on a
freshly booted kernel.
However, the crash may be reproducible on the first iteration, or it
may require running the program for a few seconds/minutes. And
obviously for race conditions the trigger rate on your machine may
differ (in either direction) from the rate on the syzbot machine.
Reproducers don't try to pin down the exact execution (that's
generally not feasible); instead they are just threaded stress tests
with some amount of natural randomness in execution. But as I noted,
syzbot was able to trigger that exact crash using that exact program.

A shorter version: it's good enough to re-trigger lots of race conditions.
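
For illustration, such a reproducer is roughly shaped like this (a
hand-written sketch, not an actual syzbot artifact; the open/close
pair is just a placeholder target):

/*
 * Two threads race the same fd in a loop; the bug, if any, fires
 * probabilistically, so the loop may need to run for seconds or
 * minutes, as described above.
 */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static int fd = -1;

static void *opener(void *arg)
{
	for (;;)
		fd = open("/dev/null", O_RDWR);	/* placeholder syscall */
}

static void *closer(void *arg)
{
	for (;;)
		close(fd);	/* races with the open() above */
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, opener, NULL);
	pthread_create(&t2, NULL, closer, NULL);
	sleep(60);	/* let the stress test run for a while */
	return 0;
}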



> > Well, you are missing that wireguard is not the only subsystem
> > syzkaller tests (in fact, it does not test it at all) and there are
> > 3000 other subsystems :)
>
> Oooo! Everything is tested at the same time. I understand now; that
> makes a lot more sense.

Yes, it's generally the whole kernel. Partitioning it into 3000
separate instances poses lots of problems on multiple fronts, and in
the end it's not really possible to draw strict boundaries: the whole
kernel is tied together via mm/fs/pipes/splice/vmsplice/etc. E.g. what
if you vmsplice some device-mapped memory into wireguard using
io_uring and set up some bpf filter somewhere and ptrace it at the
same time while sending a signal? :)
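
Even the first step of that chain already couples two subsystems: a
minimal userspace sketch of vmsplicing mapped memory into a pipe
(illustrative only, error handling elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	int pipefd[2];
	char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct iovec iov = { .iov_base = page, .iov_len = 5 };

	memcpy(page, "hello", 5);
	pipe(pipefd);
	/* Hand user pages to the pipe: mm and the pipe/splice
	 * machinery now meet in a single operation, and whatever
	 * splices the data out next pulls in yet another subsystem. */
	vmsplice(pipefd[1], &iov, 1, 0);
	return 0;
}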


> I'll look into splitting out the option, as you've asked. Note,
> though, that there are currently only three spots that have the
> "extra checks", and one of them can be optimized out by the compiler
> with aggressive enough inlining added everywhere. The other two will
> result in an immediately corrupted stack frame that should be caught
> by other things. So for now, I think you can get away with turning
> the debug option off, and you won't be missing much from the "extra
> checks", at least until we add more.

I see. Maybe something to keep in mind for the future.
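
For reference, such an option split would follow the usual pattern of
checks that compile away when the debug knob is off (a generic
userspace sketch with made-up names, not WireGuard's actual code;
build with -DDEBUG_EXTRA_CHECKS to enable the checks):

#include <stdio.h>
#include <stdlib.h>

#ifdef DEBUG_EXTRA_CHECKS
#define EXTRA_CHECK(cond)						\
	do {								\
		if (!(cond)) {						\
			fprintf(stderr, "check failed: %s\n", #cond);	\
			abort();					\
		}							\
	} while (0)
#else
#define EXTRA_CHECK(cond) ((void)0)	/* compiled out entirely */
#endif

int main(void)
{
	int packets = 1;

	EXTRA_CHECK(packets > 0);	/* costs nothing when disabled */
	printf("ok\n");
	return 0;
}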

> That's exciting about syzcaller having at it with WireGuard. Is there
> some place where I can "see" it fuzzing WireGuard, or do I just wait
> for the bug reports to come rolling in?

Well, unfortunately it does not test wireguard at the moment. I've
enabled the config as I saw it appear in linux-next:
https://github.com/google/syzkaller/commit/240ba66ba8a0a99f27e1aac01f376331051a65c2
but that's it for now.
There are 3000 subsystems; are you ready to describe the precise
interface for all of them, with all the necessary setup and
prerequisites? Nobody can do it for all subsystems. The developer of a
particular subsystem is the best candidate for also describing what it
takes to test it ;)
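
To give a flavor of what "describing an interface" means: syzkaller
descriptions are written in its syzlang DSL, roughly like the
hypothetical sketch below (made-up device and ioctl names, not a real
description). Each call gets typed arguments, and resources thread the
required setup into later calls:

resource fd_foo[fd]

openat$foo(fd const[AT_FDCWD], file ptr[in, string["/dev/foo"]], flags flags[open_flags], mode const[0]) fd_foo
ioctl$FOO_SETUP(fd fd_foo, cmd const[FOO_SETUP], arg ptr[in, foo_config])
write$foo(fd fd_foo, data ptr[in, array[int8]], len bytesize[data])

foo_config {
	mode	int32
	buf	ptr[in, array[int8]]
}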
