Date:   Thu, 16 May 2019 05:28:48 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     fdmanana@...nel.org
Cc:     fstests@...r.kernel.org, linux-btrfs@...r.kernel.org,
        linux-ext4@...r.kernel.org, jack@...e.cz,
        Filipe Manana <fdmanana@...e.com>
Subject: Re: [PATCH] fstests: generic, fsync fuzz tester with fsstress

On Wed, May 15, 2019 at 04:02:21PM +0100, fdmanana@...nel.org wrote:
> From: Filipe Manana <fdmanana@...e.com>
> 
> Run fsstress, fsync every file and directory, simulate a power failure and
> then verify that all files and directories exist, with the same data and
> metadata they had before the power failure.
> 
> This test has already found 2 bugs in btrfs, which caused the mtime and
> ctime of directories not to be preserved after replaying the log/journal
> and the loss of a directory's attributes (such as UID and GID) after
> replaying the log.
> The patches that fix the btrfs issues are titled:
> 
>   "Btrfs: fix wrong ctime and mtime of a directory after log replay"
>   "Btrfs: fix fsync not persisting changed attributes of a directory"
> 
> Running this test 1000 times:
> 
> - on ext4 it produced about a dozen journal checksum errors (on a
>   5.0 kernel) that resulted in failure to mount the filesystem after the
>   simulated power failure with dm-flakey, producing the following
>   error in dmesg/syslog:
> 
>     [Mon May 13 12:51:37 2019] JBD2: journal checksum error
>     [Mon May 13 12:51:37 2019] EXT4-fs (dm-0): error loading journal
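
In outline, the flow the quoted changelog describes is roughly the
following.  This is only a minimal sketch, not the actual generic/547
code; it assumes the standard fstests dm-flakey helpers (_init_flakey,
_mount_flakey, _flakey_drop_and_remount) and the usual $SCRATCH_MNT,
$FSSTRESS_PROG, $XFS_IO_PROG and $tmp variables:

    _init_flakey
    _mount_flakey

    # 1) Create an arbitrary tree of files and directories with fsstress.
    $FSSTRESS_PROG -d $SCRATCH_MNT -n 100 -p 4 -s $RANDOM >/dev/null 2>&1

    # 2) fsync every file (the real test fsyncs every directory as well),
    #    then record data checksums and metadata before the "power failure".
    find $SCRATCH_MNT -type f -exec $XFS_IO_PROG -c fsync {} \;
    find $SCRATCH_MNT -type f -exec md5sum {} + > $tmp.md5
    find $SCRATCH_MNT -type f -exec stat -c '%n %u %g %Y %Z' {} + > $tmp.stat

    # 3) Simulate a power failure: drop every write that never reached
    #    stable storage and mount the filesystem again.  This is the point
    #    where ext4 occasionally hits the journal checksum error above.
    _flakey_drop_and_remount

    # 4) Every fsynced file must still exist, with the same data and
    #    metadata (the real test checks directories too).
    md5sum -c --quiet $tmp.md5
    find $SCRATCH_MNT -type f -exec stat -c '%n %u %g %Y %Z' {} + | \
        diff -u $tmp.stat -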

I'm curious what configuration you used when you ran the test.  I
tried to reproduce it, and had no luck:

TESTRUNID: tytso-20190516042341
KERNEL:    kernel 5.1.0-rc3-xfstests-00034-g0c72924ef346 #999 SMP Wed May 15 00:56:08 EDT 2019 x86_64
CMDLINE:   -c 4k -C 1000 generic/547
CPUS:      2
MEM:       7680

ext4/4k: 1000 tests, 1855 seconds
Totals: 1000 tests, 0 skipped, 0 failures, 0 errors, 1855s

FSTESTPRJ: gce-xfstests
FSTESTVER: blktests baccddc (Wed, 13 Mar 2019 00:06:50 -0700)
FSTESTVER: fio  fio-3.2 (Fri, 3 Nov 2017 15:23:49 -0600)
FSTESTVER: fsverity bdebc45 (Wed, 5 Sep 2018 21:32:22 -0700)
FSTESTVER: ima-evm-utils 0267fa1 (Mon, 3 Dec 2018 06:11:35 -0500)
FSTESTVER: nvme-cli v1.7-35-g669d759 (Tue, 12 Mar 2019 11:22:16 -0600)
FSTESTVER: quota  62661bd (Tue, 2 Apr 2019 17:04:37 +0200)
FSTESTVER: stress-ng 7d0353cf (Sun, 20 Jan 2019 03:30:03 +0000)
FSTESTVER: syzkaller bab43553 (Fri, 15 Mar 2019 09:08:49 +0100)
FSTESTVER: xfsprogs v5.0.0 (Fri, 3 May 2019 12:14:36 -0500)
FSTESTVER: xfstests-bld 9582562 (Sun, 12 May 2019 00:38:51 -0400)
FSTESTVER: xfstests linux-v3.8-2390-g64233614 (Thu, 16 May 2019 00:12:52 -0400)
FSTESTCFG: 4k
FSTESTSET: generic/547
FSTESTOPT: count 1000 aex
GCE ID:    8592267165157073108
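
If it helps anyone trying to reproduce this on a plain local fstests
checkout instead of gce-xfstests, something along these lines should be
roughly equivalent (again just a sketch; it assumes TEST_DEV/SCRATCH_DEV
are already set up in local.config):

    cd xfstests
    export MKFS_OPTIONS="-b 4096"        # match the 4k block size config
    for i in $(seq 1 1000); do
            ./check generic/547 || break # stop at the first failure
    done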
