Message-ID: <0af205d9-6093-4931-abe9-f236acae8d44@oracle.com>
Date: Fri, 25 Jul 2025 09:14:25 +0100
From: John Garry <john.g.garry@...cle.com>
To: Ojaswin Mujoo <ojaswin@...ux.ibm.com>
Cc: Zorro Lang <zlang@...hat.com>, fstests@...r.kernel.org,
Ritesh Harjani <ritesh.list@...il.com>, djwong@...nel.org,
tytso@....edu, linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-ext4@...r.kernel.org
Subject: Re: [PATCH v3 05/13] generic/1226: Add atomic write test using fio
crc check verifier
On 25/07/2025 07:27, Ojaswin Mujoo wrote:
> On Wed, Jul 23, 2025 at 05:25:41PM +0100, John Garry wrote:
>> On 23/07/2025 14:51, Ojaswin Mujoo wrote:
>>>>> No, it's just something I hardcoded for that particular run. This patch
>>>>> doesn't enforce hardware-only atomic writes
>>>> If we are to test this for XFS then we need to ensure that HW atomics are
>>>> available.
>>> Why is that? Now with the verification step happening after writes,
>>> software atomic writes should also pass this test since there are no
>>> racing writes to the verify reads.
>> Sure, but racing software atomic writes against other software atomic writes
>> is not safe.
>>
>> Thanks,
>> John
> What do you mean by not safe?
Multiple threads issuing atomic writes may trample over one another.
This is due to the steps used to issue an atomic write in xfs via the
software method. Here we do 3 steps:
a. allocate blocks for an out-of-place write
b. write the data into those blocks
c. atomically update the extent mapping to point at the new blocks.
With this, threads wanting to atomically write to the same address will
each write into the new blocks and can trample over one another before
we atomically update the mapping.
So we do not guarantee serialization of atomic writes vs other atomic
writes. And this is why I said that this test is never totally safe for xfs.
We could change this simply so that software-based atomic writes are
serialized against all other dio, as follows:
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -747,6 +747,7 @@ xfs_file_dio_write_atomic(
unsigned int iolock = XFS_IOLOCK_SHARED;
ssize_t ret, ocount = iov_iter_count(from);
const struct iomap_ops *dops;
+ unsigned int dio_flags = 0;
/*
* HW offload should be faster, so try that first if it is already
@@ -766,15 +767,12 @@ xfs_file_dio_write_atomic(
if (ret)
goto out_unlock;
- /* Demote similar to xfs_file_dio_write_aligned() */
- if (iolock == XFS_IOLOCK_EXCL) {
- xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
- iolock = XFS_IOLOCK_SHARED;
- }
+ if (dio_flags & IOMAP_DIO_FORCE_WAIT)
+ inode_dio_wait(VFS_I(ip));
trace_xfs_file_direct_write(iocb, from);
ret = iomap_dio_rw(iocb, from, dops, &xfs_dio_write_ops,
- 0, NULL, 0);
+ dio_flags, NULL, 0);
/*
* The retry mechanism is based on the ->iomap_begin method returning
@@ -785,6 +783,8 @@ xfs_file_dio_write_atomic(
if (ret == -ENOPROTOOPT && dops == &xfs_direct_write_iomap_ops) {
xfs_iunlock(ip, iolock);
dops = &xfs_atomic_write_cow_iomap_ops;
+ iolock = XFS_IOLOCK_EXCL;
+ dio_flags = IOMAP_DIO_FORCE_WAIT;
goto retry;
}
But it may affect performance.
> Does it mean the test can fail?
Yes, but it is unlikely if we have HW atomics available. That is because
we will rarely be using the software-based atomic method, as the HW
method should usually be possible.