Date: Tue, 6 Feb 2024 09:53:11 +0000
From: John Garry <john.g.garry@...cle.com>
To: Dave Chinner <david@...morbit.com>
Cc: "Darrick J. Wong" <djwong@...nel.org>, hch@....de, viro@...iv.linux.org.uk,
        brauner@...nel.org, dchinner@...hat.com, jack@...e.cz,
        chandan.babu@...cle.com, martin.petersen@...cle.com,
        linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, tytso@....edu, jbongio@...gle.com,
        ojaswin@...ux.ibm.com
Subject: Re: [PATCH RFC 5/6] fs: xfs: iomap atomic write support

Hi Dave,

>>>> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
>>>> index 18c8f168b153..758dc1c90a42 100644
>>>> --- a/fs/xfs/xfs_iomap.c
>>>> +++ b/fs/xfs/xfs_iomap.c
>>>> @@ -289,6 +289,9 @@ xfs_iomap_write_direct(
>>>>    		}
>>>>    	}
>>>> +	if (xfs_inode_atomicwrites(ip))
>>>> +		bmapi_flags = XFS_BMAPI_ZERO;
> 
> We really, really don't want to be doing this during allocation
> unless we can avoid it. If the filesystem block size is 64kB, we
> could be allocating up to 96GB per extent, and that becomes an
> uninterruptable write stream inside a transaction context that holds
> inode metadata locked.

Where does that 96GB figure come from?

> 
> IOWs, if the inode is already dirty, this data zeroing effectively
> pins the tail of the journal until the data writes complete, and
> hence can potentially stall the entire filesystem for that length of
> time.
> 
> Historical note: XFS_BMAPI_ZERO was introduced for DAX where
> unwritten extents are not used for initial allocation because the
> direct zeroing overhead is typically much lower than unwritten
> extent conversion overhead.  It was not intended as a general
> purpose "zero data at allocation time" solution primarily because of
> how easy it would be to DOS the storage with a single, unkillable
> fallocate() call on slow storage.

Understood.

> 
>>> Why do we want to write zeroes to the disk if we're allocating space
>>> even if we're not sending an atomic write?
>>>
>>> (This might want an explanation for why we're doing this at all -- it's
>>> to avoid unwritten extent conversion, which defeats hardware untorn
>>> writes.)
>>
>> It's to handle the scenario where we have a partially written extent, and
>> then try to issue an atomic write which covers the complete extent.
> 
> When/how would that ever happen with the forcealign bits being set
> preventing unaligned allocation and writes?

Consider this scenario:

# mkfs.xfs -r rtdev=/dev/sdb,extsize=64K -d rtinherit=1 /dev/sda
# mount /dev/sda mnt -o rtdev=/dev/sdb
# touch  mnt/file
# /test-pwritev2 -a -d -l 4096 -p 0 /root/mnt/file # direct IO, atomic write, 4096B at pos 0
# filefrag -v mnt/file
Filesystem type is: 58465342
File size of mnt/file is 4096 (1 block of 4096 bytes)
   ext:     logical_offset:        physical_offset: length:   expected: flags:
     0:        0..       0:         24..        24:      1:             last,eof
mnt/file: 1 extent found
# /test-pwritev2 -a -d -l 16384 -p 0 /root/mnt/file
wrote -1 bytes at pos 0 write_size=16384
#

For the 2nd write, which would cover a 16KB extent, the iomap code will
iterate twice and produce two BIOs, which we don't want - that's why it
errors out there.
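
For reference, the DIO iteration shape is roughly this (simplified from
fs/iomap/direct-io.c; a sketch, not the verbatim loop):

	/*
	 * Each mapping returned by ->iomap_begin gets its own
	 * submission pass, so a range that maps as written then
	 * unwritten comes back as two mappings and hence at least
	 * two BIOs - torn from the device's point of view.
	 */
	while ((ret = iomap_iter(&iomi, ops)) > 0)
		iomi.processed = iomap_dio_iter(&iomi, dio);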

With the change in this patch, we instead have something like this after
the first write:

# /test-pwritev2 -a -d -l 4096 -p 0 /root/mnt/file
wrote 4096 bytes at pos 0 write_size=4096
# filefrag -v mnt/file
Filesystem type is: 58465342
File size of mnt/file is 4096 (1 block of 4096 bytes)
   ext:     logical_offset:        physical_offset: length:   expected: flags:
     0:        0..       3:         24..        27:      4:             last,eof
mnt/file: 1 extent found
#

So the 16KB extent is in written state and the 2nd 16KB write would
iterate once, producing a single BIO.

> 
>> In this
>> scenario, the iomap code will issue 2x IOs, which is unacceptable. So we
>> ensure that the extent is completely written whenever we allocate it. At
>> least that is my idea.
> 
> So return an unaligned extent, and then the IOMAP_ATOMIC checks you
> add below say "no" and then the application has to do things the
> slow, safe way....

We have been porting atomic write support to some database apps, and the
database developers have had to do something like manually zero the
complete file to get around this issue, which is not a good user
experience.

Note that in their case the first 4KB write is non-atomic, but that does 
not change things. They do these 4KB writes in some DB setup phase.
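
Their workaround looks something like the sketch below - writing real
zeros so the extents end up in written state, since fallocate() alone
leaves them unwritten (hypothetical helper, error-path cleanup trimmed
for brevity):

	#include <fcntl.h>
	#include <stdlib.h>
	#include <unistd.h>

	/* Pre-zero a file so later atomic writes hit written extents. */
	static int prezero_file(const char *path, off_t size)
	{
		size_t chunk = 1 << 20;		/* 1MB of zeros */
		char *buf = calloc(1, chunk);
		int fd = open(path, O_WRONLY | O_CREAT, 0600);
		off_t pos = 0;

		if (!buf || fd < 0)
			return -1;
		while (pos < size) {
			size_t len = chunk;
			ssize_t n;

			if ((off_t)len > size - pos)
				len = size - pos;
			n = pwrite(fd, buf, len, pos);
			if (n <= 0)
				return -1;
			pos += n;
		}
		free(buf);
		fsync(fd);
		return close(fd);
	}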

> 
>>> I think we should support IOCB_ATOMIC when the mapping is unwritten --
>>> the data will land on disk in an untorn fashion, the unwritten extent
>>> conversion on IO completion is itself atomic, and callers still have to
>>> set O_DSYNC to persist anything.
>>
>> But does this work for the scenario above?
> 
> Probably not, but if we want the mapping to return a single
> contiguous extent mapping that spans both unwritten and written
> states, then we should directly code that behaviour for atomic
> IO and not try to hack around it via XFS_BMAPI_ZERO.
> 
> Unwritten extent conversion will already do the right thing in that
> it will only convert unwritten regions to written in the larger
> range that is passed to it, but if there are multiple regions that
> need conversion then the conversion won't be atomic.

We would need something atomic.

> 
>>> Then we can avoid the cost of
>>> BMAPI_ZERO, because double-writes aren't free.
>>
>> About double-writes not being free, I thought that this was acceptable to
>> just have this write zero when initially allocating the extent as it should
>> not add too much overhead in practice, i.e. it's one off.
> 
> The whole point about atomic writes is they are a performance
> optimisation. If the cost of enabling atomic writes is that we
> double the amount of IO we are doing, then we've lost more
> performance than we gained by using atomic writes. That doesn't
> seem desirable....

But the zeroing is a one-off per extent allocation, right? I would
expect just an initial overhead when the database is being created/extended.

Anyway, I did mark this as an RFC for this very reason.

> 
>>
>>>
>>>> +
>>>>    	error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write, dblocks,
>>>>    			rblocks, force, &tp);
>>>>    	if (error)
>>>> @@ -812,6 +815,44 @@ xfs_direct_write_iomap_begin(
>>>>    	if (error)
>>>>    		goto out_unlock;
>>>> +	if (flags & IOMAP_ATOMIC) {
>>>> +		xfs_filblks_t unit_min_fsb, unit_max_fsb;
>>>> +		unsigned int unit_min, unit_max;
>>>> +
>>>> +		xfs_get_atomic_write_attr(ip, &unit_min, &unit_max);
>>>> +		unit_min_fsb = XFS_B_TO_FSBT(mp, unit_min);
>>>> +		unit_max_fsb = XFS_B_TO_FSBT(mp, unit_max);
>>>> +
>>>> +		if (!imap_spans_range(&imap, offset_fsb, end_fsb)) {
>>>> +			error = -EINVAL;
>>>> +			goto out_unlock;
>>>> +		}
>>>> +
>>>> +		if ((offset & mp->m_blockmask) ||
>>>> +		    (length & mp->m_blockmask)) {
>>>> +			error = -EINVAL;
>>>> +			goto out_unlock;
>>>> +		}
> 
> That belongs in the iomap DIO setup code, not here. It's also only
> checking the data offset/length is filesystem block aligned, not
> atomic IO aligned, too.

Hmmm... I'm not sure about that. Initially XFS will only support writes
whose size is a multiple of the FS block size, and that is what we are
checking here, even if it is not obvious.

The idea is that we first ensure the size is a multiple of the FS block
size, and then we can use br_blockcount directly, below.
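
To spell out that ordering (a fragment of the RFC hunk above, with
comments added for illustration):

	/*
	 * Reject anything not FS block aligned first; after this,
	 * XFS_B_TO_FSBT() loses nothing to truncation, so
	 * imap.br_blockcount can be compared against
	 * unit_min_fsb/unit_max_fsb directly.
	 */
	if ((offset & mp->m_blockmask) ||
	    (length & mp->m_blockmask)) {
		error = -EINVAL;
		goto out_unlock;
	}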

> 
>>>> +
>>>> +		if (imap.br_blockcount == unit_min_fsb ||
>>>> +		    imap.br_blockcount == unit_max_fsb) {
>>>> +			/* ok if exactly min or max */
> 
> Why? Exact sizing doesn't imply alignment is correct.

We're not checking alignment specifically here, just that the size is
OK.

> 
>>>> +		} else if (imap.br_blockcount < unit_min_fsb ||
>>>> +			   imap.br_blockcount > unit_max_fsb) {
>>>> +			error = -EINVAL;
>>>> +			goto out_unlock;
> 
> Why do this after an exact check?

And this is a continuation of the size check.

> 
>>>> +		} else if (!is_power_of_2(imap.br_blockcount)) {
>>>> +			error = -EINVAL;
>>>> +			goto out_unlock;
> 
> Why does this matter? If the extent mapping spans a range larger
> than was asked for, who cares what size it is as the infrastructure
> is only going to do IO for the sub-range in the mapping the user
> asked for....

OK, so where would be a better place for the power-of-2 write length
check? In the iomap DIO code?

I was thinking of doing that, but I'm not so happy with scattering the
checks across layers.
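
If it did live there, a generic check at the iomap DIO entry could look
roughly like this (a sketch only - IOCB_ATOMIC is from this series, and
the placement here is illustrative, not the actual patch):

	/* Validate the atomic write length once, before iterating
	 * mappings. */
	if (iocb->ki_flags & IOCB_ATOMIC) {
		size_t len = iov_iter_count(iter);

		if (!is_power_of_2(len))
			return -EINVAL;
	}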

> 
>>>> +		}
>>>> +
>>>> +		if (imap.br_startoff &&
>>>> +		    imap.br_startoff & (imap.br_blockcount - 1)) {
>>>
>>> Not sure why we care about the file position, it's br_startblock that
>>> gets passed into the bio, not br_startoff.
>>
>> We just want to ensure that the length of the write is valid w.r.t. to the
>> offset within the extent, and br_startoff would be the offset within the
>> aligned extent.
> 
> I'm not sure why the filesystem extent mapping code needs to care
> about IOMAP_ATOMIC like this - the extent allocation behaviour is
> determined by the inode forcealign flag, not IOMAP_ATOMIC.
> Everything else we have to do is just mapping the offset/len that
> was passed to it from the iomap DIO layer. As long as we allocate
> with correct alignment and return a mapping that spans the start
> offset of the requested range, we've done our job here.
> 
> Actually determining if the mapping returned for IO is suitable for
> the type of IO we are doing (i.e. IOMAP_ATOMIC) is the
> responsibility of the iomap infrastructure. The same checks will
> have to be done for every filesystem that implements atomic writes,
> so these checks belong in the generic code, not the filesystem
> mapping callouts.

We can move some of these checks to the core iomap code.

However, the core iomap code does not know the FS atomic write min and
max units per inode, so we still need some checks here.
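
Unless those limits were plumbed through, for example (purely
hypothetical structure, not part of this series):

	/*
	 * Hypothetical: if the fs reported its per-inode atomic write
	 * unit bounds to the generic layer, iomap could do the min/max
	 * validation itself. These fields/types do not exist today;
	 * they only sketch the plumbing that would be needed.
	 */
	struct iomap_atomic_limits {
		unsigned int	unit_min;	/* bytes */
		unsigned int	unit_max;	/* bytes */
	};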

Thanks,
John

