Message-ID: <CAL3q7H4C9+e6MZGYKgmGMZbwOvRgSJtN2vf7-9dCzmeUAsuYCg@mail.gmail.com>
Date:   Fri, 17 May 2019 16:33:56 +0100
From:   Filipe Manana <fdmanana@...nel.org>
To:     "Theodore Ts'o" <tytso@....edu>
Cc:     fstests <fstests@...r.kernel.org>,
        linux-btrfs <linux-btrfs@...r.kernel.org>,
        linux-ext4 <linux-ext4@...r.kernel.org>, Jan Kara <jack@...e.cz>,
        Filipe Manana <fdmanana@...e.com>
Subject: Re: [PATCH] fstests: generic, fsync fuzz tester with fsstress

On Thu, May 16, 2019 at 6:18 PM Filipe Manana <fdmanana@...nel.org> wrote:
>
> On Thu, May 16, 2019 at 5:59 PM Theodore Ts'o <tytso@....edu> wrote:
> >
> > On Thu, May 16, 2019 at 10:54:57AM +0100, Filipe Manana wrote:
> > >
> > > Haven't tried ext4 with 1 process only (instead of 4), but I can try
> > > to see if it happens without concurrency as well.
> >
> > How many CPU's and how much memory were you using?  And I assume this
> > was using KVM/QEMU?  How was it configured?
>
> Yep, kvm and qemu (3.0.0). The qemu config:
>
> https://pastebin.com/KNigeXXq
>
> TEST_DEV is the drive with ID "drive1" and SCRATCH_DEV is the drive
> with ID "drive2".
>
> The host has:
>
> Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
> 64 GB of RAM
> crappy seagate hdd:
>
> Device Model:     ST3000DM008-2DM166
> Serial Number:    Z5053T2R
> LU WWN Device Id: 5 000c50 0a46f7ecb
> Firmware Version: CC26
> User Capacity:    3,000,592,982,016 bytes [3.00 TB]
> Sector Sizes:     512 bytes logical, 4096 bytes physical
> Rotation Rate:    7200 rpm
> Form Factor:      3.5 inches
> Device is:        Not in smartctl database [for details use: -P showall]
> ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
> SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
>
> It hosts 3 qemu instances, all with the same configuration.
>
> I left the test running earlier today for about 1 hour on ext4 with
> only 1 fsstress process. Didn't manage to reproduce.
> With 4 or more processes, those journal checksum failures happen sporadically.
> I can leave it running with 1 process during this evening and see what
> we get here, if it happens with 1 process, it should be trivial to
> reproduce anywhere.

Ok, so I left it running overnight, for 17,000+ iterations. It failed
102 times with that journal corruption.
I changed the test to randomize the number of fsstress processes
between 1 and 8. I'm attaching the logs (.full, .out.bad and dmesg
files) in case you are interested in the seed values for fsstress.

So the test does now:

(...)
procs=$(( (RANDOM % 8) + 1 ))
args=`_scale_fsstress_args -p $procs -n 100 $FSSTRESS_AVOID -d $SCRATCH_MNT/test`
args="$args -f mknod=0 -f symlink=0"
echo "Running fsstress with arguments: $args" >>$seqres.full
(...)
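As a standalone sanity check of the proc-count expression above: bash's $RANDOM is 0..32767, so (RANDOM % 8) + 1 must always land in [1, 8]. A minimal sketch (loop bound is arbitrary):

```shell
#!/bin/bash
# Sanity check: (RANDOM % 8) + 1 should always be in [1, 8].
for i in $(seq 1 1000); do
    procs=$(( (RANDOM % 8) + 1 ))
    if [ "$procs" -lt 1 ] || [ "$procs" -gt 8 ]; then
        echo "out of range: $procs"
        exit 1
    fi
done
echo "ok: 1000 samples, all in [1, 8]"
```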

I verified that no failures happened with only 1 process, and that the
more processes are used, the more likely the issue is to hit:

$ egrep -r 'Running fsstress with arguments: -p' . | cut -d ' ' -f 6 | \
  perl -n -e 'use Statistics::Histogram; @data = <>; chomp @data; print get_histogram(\@data);'
Count: 102
Range:  2.000 -  8.000; Mean:  5.598; Median:  6.000; Stddev:  1.831
Percentiles:  90th:  8.000; 95th:  8.000; 99th:  8.000
   2.000 -    2.348:     5 ##############
   2.348 -    3.171:    13 ####################################
   3.171 -    4.196:    12 #################################
   4.196 -    5.473:    15 ##########################################
   5.473 -    6.225:    19 #####################################################
   6.225 -    7.064:    19 #####################################################
   7.064 -    8.000:    19 #####################################################
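If Statistics::Histogram isn't installed, the summary line can be approximated with plain awk by piping the cut output into it instead. A rough sketch (the input values below are illustrative, not the actual 102 data points, and it computes the population stddev, which may differ slightly from what the module reports):

```shell
# Rough stand-in for the Statistics::Histogram summary line.
# Sample values are illustrative only.
printf '%s\n' 2 3 5 6 8 8 | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean = sum / n
        printf "Count: %d; Mean: %.3f; Stddev: %.3f\n", n, mean, sqrt(sumsq / n - mean * mean)
    }'
```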

I also verified that picking one of the failing seeds, such as
1557322233 for 2 processes, and running the test 10 times with that
seed didn't reproduce the corruption, so it indeed seems to be some
race causing the journal corruption.

I forgot to mention it previously, but here's my kernel config in case it helps:
https://pastebin.com/LKvRcAW1

Thanks.

>
> >
> > Thanks,
> >
> >                                         - Ted

Download attachment "ext4_generic_547_logs.tar.xz" of type "application/x-xz" (14024 bytes)

