Message-ID: <20240111185930.GA911245@mit.edu>
Date: Thu, 11 Jan 2024 13:59:30 -0500
From: "Theodore Ts'o" <tytso@....edu>
To: Allen <allen.lkml@...il.com>
Cc: linux-ext4@...r.kernel.org, jack@...e.cz,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        stable@...r.kernel.org, Allen Pais <apais@...ux.microsoft.com>,
        kelseysteele@...ux.microsoft.com, tyhicks@...ux.microsoft.com
Subject: Re: EXT4-fs: Intermittent segfault with memory corruption

On Thu, Jan 11, 2024 at 07:26:06AM -0800, Allen wrote:
> 
> I hope this email finds you well. We are reaching out to report a
> persistent issue that we have been facing on Windows Subsystem for
> Linux (WSL)[1] with various kernel versions. We have encountered the
> problem on kernel versions v5.15, v6.1, v6.6 stable kernels, and also
> the current upstream kernel. While the issue takes longer to reproduce
> on v5.15, it is consistently observable across these versions.

You've successfully reproduced the problem across multiple kernel
versions.  Have you tried reproducing it on multiple different
hardware platforms?  e.g., with different desktops and/or servers,
and with different storage devices?

The symptoms you are reporting are very highly correlated with
hardware problems, or in the case where you are running under
virtualization, with bugs in the VMM and/or the host OS's storage
stack.

In particular, these errors:

> EXT4-fs error (device sdc): ext4_find_dest_de:2092: inode #32168:
> block 2334198: comm dpkg: bad entry in directory: rec_len is smaller
> than minimal - offset=0, inode=0, rec_len=0, size=4084 fake=0
> 
> and
> 
> EXT4-fs warning (device sdc): dx_probe:890: inode #27771: comm dpkg:
> dx entry: limit 0 != root limit 508
> EXT4-fs warning (device sdc): dx_probe:964: inode #27771: comm dpkg:
> Corrupt directory, running e2fsck is recommended
> EXT4-fs error (device sdc): ext4_empty_dir:3098: inode #27753: block
> 133944722: comm dpkg: bad entry in directory: rec_len is smaller than
> minimal - offset=0, inode=0, rec_len=0, size=4096 fake=0

... seem to hint that ext4 has read a directory block where all or
part of its contents have been replaced with all zeros (hence the
record length, or the hash tree index, is zero).  That is typically
caused by a hardware and/or VMM problem.
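
If you want to confirm that theory, debugfs (from e2fsprops's
e2fsprogs package) can dump the raw contents of the directory block
named in the error message, so you can see whether it really came
back zero-filled.  A sketch, using the device and the inode/block
numbers from your first log excerpt:

    # dump the raw contents of the suspect directory block (read-only)
    debugfs -R "block_dump 2334198" /dev/sdc

    # sanity-check the directory inode itself
    debugfs -R "stat <32168>" /dev/sdc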

> or we see a segfault message where the source can change depending on
> which command we're testing with (dpkg, apt, gcc..):
> 
> dpkg[135]: segfault at 0 ip 00007f9209eb6a19 sp 00007ffd8a6a0b08 error
> 4 in libc-2.31.so[7f9209d6e000+159000] likely on CPU 1 (core 0, socket
> 0)

And this could very well be because a data block has been replaced
with garbage, or the wrong data block, or all zeroes.

It might also be load related --- that is, the problem only shows up
when the system is more heavily loaded, which might explain why
enabling debugging makes the problem harder to reproduce.
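
One way to test the load theory, sketched here with stress-ng (the
worker counts and timeout are placeholders to tune for your setup),
is to run your reproducer while adding artificial memory and I/O
pressure and seeing whether the corruption shows up faster:

    # add memory, sync, and disk I/O pressure for ten minutes
    # while the dpkg/apt reproducer runs in another shell
    stress-ng --vm 4 --vm-bytes 75% --io 2 --hdd 2 --timeout 10m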


I am very doubtful that the problem is in the ext4 code proper,
especially since no one else has reported this problem, and at $WORK,
we run continuous fstests testing on ext4 against a wide range of
storage devices (e.g., HDDs, SSDs, iSCSI, etc.) and hardware
platforms (arm64 and x86).  And that's just for our data center
kernels, which are based on various LTS kernels.  For Google's
Container-Optimized OS, which is used in both 1st-party and 3rd-party
VMs in Google Cloud, we do similar testing using gce-xfstests[1] on a
continuous basis, and we haven't seen the kind of bugs that you are
reporting.

[1] https://thunk.org/gce-xfstests

For that matter, I am regularly running gce-xfstests for ext4's
upstream development, and other ext4 developers run fstests using
kvm-xfstests on a variety of different hardware devices and
virtualization environments.  So that tends to suggest that the
problem is either in the hardware or the virtualization environment
(WSL) that you are using.


So to that end, you might want to consider running some lower-level
tests --- for example, using fio with data verification enabled.  We
also get a huge amount of mileage out of using fstests to detect
problems lower in the storage stack.  This is why we run
fstests/xfstests on ext4 against essentially every kind of storage
device (iSCSI, HDD, flash, etc.).  So setting up fstests on a variety
of file systems and storage devices is not a bad idea.
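
As a concrete sketch (the directory and size are placeholders for
whatever you are testing), fio's verification mode writes blocks
containing checksums and then reads them back to validate them, which
catches exactly the kind of silent corruption described above:

    # write 4GiB of checksummed data, then read it back and verify;
    # verify_fatal stops the job on the first mismatch
    fio --name=verify-test --directory=/mnt/test --rw=randwrite \
        --bs=4k --size=4g --ioengine=libaio --direct=1 \
        --verify=crc32c --do_verify=1 --verify_fatal=1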

It shouldn't be difficult to take the test appliance in
kvm-xfstests[2][3] and get it to work under WSL.  (For example, over
the holidays, I got fstests running on macOS on a MacBook Air M2 15"
using the hvf framework.)  However, I suggest that you focus on
lower-level block and memory stress testing before worrying about how
to run fstests under WSL.

[2] https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
[3] https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-xfstests.md
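
For reference, once the appliance is set up per the quickstart[2],
typical invocations look something like this (the smoke test and the
ext4/4k configuration are the standard examples from the
documentation):

    # quick sanity run of a small subset of tests
    kvm-xfstests smoke

    # run the full "auto" group against the 4k-block ext4 config
    kvm-xfstests -c ext4/4k -g auto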

Cheers,

					- Ted
