Message-ID: <aLsZe0czym9X9Lo4@li-dc0c254c-257c-11b2-a85c-98b6c1322444.ibm.com>
Date: Fri, 5 Sep 2025 22:40:19 +0530
From: Ojaswin Mujoo <ojaswin@...ux.ibm.com>
To: John Garry <john.g.garry@...cle.com>
Cc: Zorro Lang <zlang@...hat.com>, fstests@...r.kernel.org,
Ritesh Harjani <ritesh.list@...il.com>, djwong@...nel.org,
tytso@....edu, linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-ext4@...r.kernel.org
Subject: Re: [PATCH v5 11/12] ext4: Test atomic writes allocation and write
codepaths with bigalloc
On Tue, Sep 02, 2025 at 04:54:48PM +0100, John Garry wrote:
> On 22/08/2025 09:02, Ojaswin Mujoo wrote:
> > From: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>
> >
> > This test does parallel RWF_ATOMIC IO on multiple truncated files in
> > a small FS. The idea is to stress the ext4 allocator to ensure we are
> > able to handle low-space scenarios correctly with atomic writes. We
> > brute force this for different blocksizes and clustersizes, and after
> > each iteration we ensure the data was not torn or corrupted using fio
> > crc verification.
> >
> > Note that in this test we use overlapping atomic writes of the same IO
> > size. Although serializing racing writes is not guaranteed for
> > RWF_ATOMIC, NVMe and SCSI provide this guarantee as an inseparable
> > feature of power-fail atomicity. Keeping the IO size the same also
> > ensures that ext4 doesn't tear the write due to racing ioend unwritten
> > conversion.
> >
> > The value of this test is that we make sure RWF_ATOMIC is handled
> > correctly by ext4, and also verify that the block layer doesn't split
> > an atomic write or generate multiple bios for it.
> >
> > Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
> > Reviewed-by: Darrick J. Wong <djwong@...nel.org>
> > Signed-off-by: Ojaswin Mujoo <ojaswin@...ux.ibm.com>
> > ---
> > tests/ext4/062 | 203 +++++++++++++++++++++++++++++++++++++++++++++
> > tests/ext4/062.out | 2 +
> > 2 files changed, 205 insertions(+)
> > create mode 100755 tests/ext4/062
> > create mode 100644 tests/ext4/062.out
> >
> > diff --git a/tests/ext4/062 b/tests/ext4/062
> > new file mode 100755
> > index 00000000..d48f69d3
> > --- /dev/null
> > +++ b/tests/ext4/062
> > @@ -0,0 +1,203 @@
> > +#! /bin/bash
> > +# SPDX-License-Identifier: GPL-2.0
> > +# Copyright (c) 2025 IBM Corporation. All Rights Reserved.
> > +#
> > +# FS QA Test 062
> > +#
> > +# This test does parallel RWF_ATOMIC IO on multiple truncated files in a
> > +# small FS. The idea is to stress the ext4 allocator to ensure we are able to
> > +# handle low-space scenarios correctly with atomic writes. We brute force this
> > +# for all possible blocksizes and clustersizes, and after each iteration we
> > +# ensure the data was not torn or corrupted using fio crc verification.
> > +#
> > +# Note that in this test we use overlapping atomic writes of the same IO
> > +# size. Although serializing racing writes is not guaranteed for RWF_ATOMIC,
> > +# NVMe and SCSI provide this guarantee as an inseparable feature of power-fail
> > +# atomicity. Keeping the IO size the same also ensures that ext4 doesn't tear
> > +# the write due to racing ioend unwritten conversion.
> > +#
> > +# The value of this test is that we make sure RWF_ATOMIC is handled
> > +# correctly by ext4, and also verify that the block layer doesn't split an
> > +# atomic write or generate multiple bios for it.
> > +#
> > +
> > +. ./common/preamble
> > +. ./common/atomicwrites
> > +
> > +_begin_fstest auto rw stress atomicwrites
> > +
> > +_require_scratch_write_atomic
> > +_require_aiodio
> > +_require_fio_version "3.38+"
> > +
> > +FSSIZE=$((360*1024*1024))
> > +FIO_LOAD=$(($(nproc) * LOAD_FACTOR))
> > +
> > +# Calculate bs as per bdev atomic write units.
> > +bdev_awu_min=$(_get_atomic_write_unit_min $SCRATCH_DEV)
> > +bdev_awu_max=$(_get_atomic_write_unit_max $SCRATCH_DEV)
> > +bs=$(_max 4096 "$bdev_awu_min")
> > +
> > +function create_fio_configs()
> > +{
> > + local bsize=$1
> > + create_fio_aw_config $bsize
> > + create_fio_verify_config $bsize
> > +}
> > +
> > +function create_fio_verify_config()
> > +{
> > + local bsize=$1
> > +cat >$fio_verify_config <<EOF
> > + [global]
> > + direct=1
> > + ioengine=libaio
> > + rw=read
> > + bs=$bsize
> > + fallocate=truncate
> > + size=$((FSSIZE / 12))
> > + iodepth=$FIO_LOAD
> > + numjobs=$FIO_LOAD
> > + group_reporting=1
> > + atomic=1
> > +
> > + verify_only=1
> > + verify_state_save=0
> > + verify=crc32c
> > + verify_fatal=1
> > + verify_write_sequence=0
> > +
> > + [verify-job1]
> > + filename=$SCRATCH_MNT/testfile-job1
> > +
> > + [verify-job2]
> > + filename=$SCRATCH_MNT/testfile-job2
> > +
> > + [verify-job3]
> > + filename=$SCRATCH_MNT/testfile-job3
> > +
> > + [verify-job4]
> > + filename=$SCRATCH_MNT/testfile-job4
> > +
> > + [verify-job5]
> > + filename=$SCRATCH_MNT/testfile-job5
> > +
> > + [verify-job6]
> > + filename=$SCRATCH_MNT/testfile-job6
> > +
> > + [verify-job7]
> > + filename=$SCRATCH_MNT/testfile-job7
> > +
> > + [verify-job8]
> > + filename=$SCRATCH_MNT/testfile-job8
>
> do you really need multiple jobs for verify?
Yes, since we want each job to verify its own file.
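
To illustrate the pairing, here is a minimal two-job sketch (filenames and
mountpoint hypothetical, mirroring the quoted config): the write-phase job
and its verify-phase job share a filename, so each reader checks exactly
the crc32c checksums its paired writer laid down, rather than one job
scanning all files:

```
; write phase: each job writes its own file, embedding crc32c checksums
[aw-job1]
filename=/mnt/scratch/testfile-job1
rw=randwrite
atomic=1
verify=crc32c

; verify phase (run with verify_only=1 in [global]): same filename, so
; this job re-reads and checks only the data written by aw-job1
[verify-job1]
filename=/mnt/scratch/testfile-job1
```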
>
>