Message-ID: <ZYGPZUerlEaCVRq8@dread.disaster.area>
Date: Tue, 19 Dec 2023 23:41:09 +1100
From: Dave Chinner <david@...morbit.com>
To: Aleksandr Nogikh <nogikh@...gle.com>
Cc: Alexander Potapenko <glider@...gle.com>,
	Dave Chinner <dchinner@...hat.com>,
	syzbot+a6d6b8fffa294705dbd8@...kaller.appspotmail.com, hch@....de,
	davem@...emloft.net, herbert@...dor.apana.org.au,
	linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
	syzkaller-bugs@...glegroups.com, linux-xfs@...r.kernel.org
Subject: Re: [syzbot] [crypto?] KMSAN: uninit-value in __crc32c_le_base (3)

On Mon, Dec 18, 2023 at 11:22:40AM +0100, Aleksandr Nogikh wrote:
> Hi Dave,
> 
> > KMSAN has been used for quite a long time with syzbot, however,
> > and it's supposed to find these problems, too. Yet it's only been
> > finding this for 6 months?
> 
> As Alex already mentioned, there were big fs fuzzing improvements in
> 2022, and that's exactly when we started seeing "KMSAN: uninit-value
> in __crc32c_le_base" (I've just checked crash history). Before that
> moment the code was likely just not exercised on syzbot.

Can you tell us what these "big fuzzing improvements" were? I mean,
you're trying to fuzz our code, and we've been working on rejecting
fuzzing for the last 15 years, so if you're doing something novel it
would help us work out how to defeat it quickly and efficiently.

> On Fri, Dec 15, 2023 at 10:59 PM 'Dave Chinner' via syzkaller-bugs
> <syzkaller-bugs@...glegroups.com> wrote:
> >
> > On Fri, Dec 15, 2023 at 03:41:49PM +0100, Alexander Potapenko wrote:
> > >
> > > You are right, syzbot used to mount XFS way before 2022.
> > > On the other hand, last fall there were some major changes to the way
> > > syz_mount_image() works, so I am attributing the newly detected bugs
> > > to those changes.
> >
> > Oh, so that's when syzbot first turned on XFS V5 format testing?
> >
> > Or was that done in April, when this issue was first reported?
> >
> > > Unfortunately we don't have much insight into reasons behind syzkaller
> > > being able to trigger one bug or another: once a bug is found for the
> > > first time, the likelihood of triggering it again increases, but finding
> > > it initially might be tricky.
> > >
> > > I don't quite know how trivial the repro at
> > > https://gist.github.com/xrivendell7/c7bb6ddde87a892818ed1ce206a429c4 is,
> >
> > I just looked at it - all it does is create a new file. It's
> > effectively "mount; touch", which is exactly what I said earlier
> > in the thread should reproduce this issue every single time.
> >
> > > but overall we are not drilling deep enough into XFS.
> > > https://storage.googleapis.com/syzbot-assets/8547e3dd1cca/ci-upstream-kmsan-gce-c7402612.html
> > > (ouch, 230Mb!) shows very limited coverage.
> >
> > *sigh*
> >
> > Did you think to look at the coverage results to check why the
> > numbers for XFS, ext4 and btrfs are all at 1%?
> 
> Hmmm, thanks for pointing it out!
> 
> Our ci-upstream-kmsan-gce instance is configured in such a way that
> the fuzzer program is quite restricted in what it can do. Apparently,
> it also lacks capabilities to do mounts, so we get almost no coverage
> in fs/*/**. I'll check whether the lack of permissions to mount() was
> intended.
> 
> On the other hand, the ci-upstream-kmsan-gce-386 instance does not
> have such restrictions at all and we do see fs/ coverage there:
> https://storage.googleapis.com/syzbot-assets/609dc759f08b/ci-upstream-kmsan-gce-386-0e389834.html
> 
> It's still quite low for fs/xfs, which is explainable -- we almost
> immediately hit "KMSAN: uninit-value in __crc32c_le_base". For the
> same reason, coverage elsewhere is also somewhat lower than it could
> be -- we spend too much time restarting VMs after crashes. Once the fix
> patch reaches the fuzzed kernel tree, ci-upstream-kmsan-gce-386 should
> be back to normal.
> 
> If we want to see how deep syzbot can go into the fs/ code in general,
> it's better to look at the KASAN instance coverage:
> https://storage.googleapis.com/syzbot-assets/12b7d6ca74e6/ci-upstream-kasan-gce-root-0e389834.html
>  (*)
> 
> Here e.g. fs/ext4 is already 63% and fs/xfs is 16%.

Actually, that XFS number is an excellent result. I don't think we
can do much better than that.

I know, that's not the response you expected.

Everyone knows that higher coverage numbers are better because they
mean we've tested more code, right?

Wrong.

When it comes to fuzzing-based attacks, the earlier the bad data is
detected and rejected, the better the result. The better the
detection and rejection algorithms get, the lower the coverage we
should see of the rest of the code. i.e. the detection code should
be extensively covered, but the rest of the code should have very
little coverage because of how quickly the filesystem reacts to
fatal object corruption.
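To make the pattern concrete, here is a minimal, self-contained
sketch in C (hypothetical structure, names and magic value, not
actual XFS code) of why corruption fuzzing concentrates coverage in
the verifier and starves the interpretation code:

    #include <errno.h>
    #include <stdint.h>

    struct disk_obj {
            uint32_t magic;
            uint32_t crc;
            /* ... payload fields ... */
    };

    #define OBJ_MAGIC 0xd15cf00du   /* made-up example magic */

    static int obj_verify(const struct disk_obj *obj)
    {
            if (obj->magic != OBJ_MAGIC)
                    return 0;       /* corrupt images land here constantly */
            /* CRC and structural checks would follow */
            return 1;
    }

    static int obj_read(struct disk_obj *obj)
    {
            if (!obj_verify(obj))
                    return -EUCLEAN;        /* reject before interpretation */
            /* interpretation path: rarely reached with corrupt input */
            return 0;
    }

    int main(void)
    {
            struct disk_obj bad = { 0 };    /* simulated corrupt object */
            return obj_read(&bad) == -EUCLEAN ? 0 : 1;
    }

Feed that program nothing but corrupt objects and a coverage tool
will show obj_verify() fully covered while the code after the check
never runs. That asymmetry is the signature of a filesystem that is
rejecting fuzzed input early.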

And the evidence for this in the XFS coverage results?

Take a look at fs/xfs/libxfs/xfs_inode_buf.c. Every single line of
the disk inode format verifiers has been covered (i.e. every
possible corruption case we can detect has been exercised).

That's good.
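For anyone who hasn't read one of those verifiers, the shape is a
chain of early returns, one per detectable corruption. A heavily
condensed sketch with abbreviated fields and paraphrased checks
(this is not the real xfs_dinode_verify() body):

    #include <stddef.h>
    #include <stdint.h>

    struct dinode {
            uint16_t magic;
            uint8_t  version;
            uint64_t size;
    };

    #define DINODE_MAGIC 0x494eu    /* "IN" */

    /* Return a description of the first corruption found, or NULL. */
    static const char *dinode_verify(const struct dinode *dip)
    {
            if (dip->magic != DINODE_MAGIC)
                    return "bad magic";
            if (dip->version < 1 || dip->version > 3)
                    return "bad version";
            if ((int64_t)dip->size < 0)
                    return "bad size";
            /* ... many more structural and CRC checks ... */
            return NULL;
    }

Each flavour of corruption trips a different early return, so a
fuzzer that mutates on-disk fields ends up exercising every line of
a function shaped like this.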

However, there is zero coverage of core formatting functions like
xfs_inode_to_disk(), which indicates that no inodes have been
successfully modified and written back to disk.

That's *even better*.

Think about that for a minute.

The coverage data is telling us that we've read lots of corrupt
inodes and rejected them, but the testing has made almost no
successful inode modifications that were written back to stable
storage. That's because the widespread corruption in the images
means a fatal corruption is detected before modifications are made,
or the modifications are aborted before they can be pushed back to
the corrupt image.

The same pattern appears for most other major on-disk subsystems.
They either have not been exercised at all (e.g. the extent btree
code), or the only code in the subsystem with significant coverage
is the object lookup code and the format verifiers that the lookup
code runs.
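The coupling works roughly like this; a simplified sketch of the
buffer-ops pattern (hypothetical types, not the exact kernel
structures): every metadata read carries a verifier, so the lookup
code never inspects a buffer that hasn't already passed it.

    struct buf;                     /* opaque metadata buffer */

    struct buf_ops {
            const char *name;
            int (*verify_read)(struct buf *bp);     /* after read I/O */
            int (*verify_write)(struct buf *bp);    /* before write I/O */
    };

    static int read_and_lookup(struct buf *bp, const struct buf_ops *ops)
    {
            int error = ops->verify_read(bp);
            if (error)
                    return error;   /* corrupt: lookup never sees the data */
            /* ... key comparison/binary search over verified contents ... */
            return 0;
    }

With every read gated like that, a corrupt image can only ever light
up the lookup entry points and the verifier itself.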

This is an excellent result because it proves that XFS is detecting
the majority of corrupt structures in its initial object
search/iteration paths. Corruption is not getting past the first
read from disk, and so no code other than the search/lookup code
and the verifiers is getting run.

Put simply: we are not letting corrupt structures get into code
paths where they can be misinterpreted and do damage.

From my perspective as an experienced filesystem developer, this is
exactly the sort of coverage pattern I would like to see from -all
filesystems- when they are fed nothing but extensively corrupted
filesystems the way syzbot does.

The basic truth is that if filesystems are good at corruption
detection and rejection, they should have very low code coverage
numbers from syzbot testing.

-Dave.

-- 
Dave Chinner
david@...morbit.com
