Message-ID: <ZcGIPlNCkL6EDx3Z@dread.disaster.area>
Date: Tue, 6 Feb 2024 12:15:42 +1100
From: Dave Chinner <david@...morbit.com>
To: John Garry <john.g.garry@...cle.com>
Cc: "Darrick J. Wong" <djwong@...nel.org>, hch@....de,
viro@...iv.linux.org.uk, brauner@...nel.org, dchinner@...hat.com,
jack@...e.cz, chandan.babu@...cle.com, martin.petersen@...cle.com,
linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, tytso@....edu, jbongio@...gle.com,
ojaswin@...ux.ibm.com
Subject: Re: [PATCH RFC 5/6] fs: xfs: iomap atomic write support
On Mon, Feb 05, 2024 at 01:36:03PM +0000, John Garry wrote:
> On 02/02/2024 18:47, Darrick J. Wong wrote:
> > On Wed, Jan 24, 2024 at 02:26:44PM +0000, John Garry wrote:
> > > Ensure that when creating a mapping that we adhere to all the atomic
> > > write rules.
> > >
> > > We check that the mapping covers the complete range of the write to ensure
> > > that we'll be just creating a single mapping.
> > >
> > > Currently the minimum granularity is the FS block size, but it should
> > > be possible to support smaller sizes in future.
> > >
> > > Signed-off-by: John Garry <john.g.garry@...cle.com>
> > > ---
> > > I am setting this as an RFC as I am not sure on the change in
> > > xfs_iomap_write_direct() - it gives the desired result AFAICS.
> > >
> > > fs/xfs/xfs_iomap.c | 41 +++++++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 41 insertions(+)
> > >
> > > diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> > > index 18c8f168b153..758dc1c90a42 100644
> > > --- a/fs/xfs/xfs_iomap.c
> > > +++ b/fs/xfs/xfs_iomap.c
> > > @@ -289,6 +289,9 @@ xfs_iomap_write_direct(
> > >  		}
> > >  	}
> > > +	if (xfs_inode_atomicwrites(ip))
> > > +		bmapi_flags = XFS_BMAPI_ZERO;
We really, really don't want to be doing this during allocation
if we can avoid it. If the filesystem block size is 64kB, we
could be allocating up to 96GB per extent, and that becomes an
uninterruptible write stream inside a transaction context that holds
inode metadata locked.
IOWs, if the inode is already dirty, this data zeroing effectively
pins the tail of the journal until the data writes complete, and
hence can potentially stall the entire filesystem for that length of
time.
Historical note: XFS_BMAPI_ZERO was introduced for DAX where
unwritten extents are not used for initial allocation because the
direct zeroing overhead is typically much lower than unwritten
extent conversion overhead. It was not intended as a general
purpose "zero data at allocation time" solution primarily because of
how easy it would be to DOS the storage with a single, unkillable
fallocate() call on slow storage.
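
For reference, the IO path this change puts the zeroing on looks roughly
like this (heavily condensed sketch, not the exact code):

	error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write, dblocks,
			rblocks, force, &tp);
	/* transaction now holds log space, inode metadata is locked */

	error = xfs_bmapi_write(tp, ip, offset_fsb, count_fsb,
			XFS_BMAPI_ZERO, 0, &imap, &nimaps);
	/*
	 * With XFS_BMAPI_ZERO set, the allocator zeroes the newly
	 * allocated extent synchronously (xfs_zero_extent() ->
	 * blkdev_issue_zeroout()) before returning, so potentially
	 * tens of GB of IO is issued and waited on right here...
	 */

	error = xfs_trans_commit(tp);
	/* ...and only now can the log tail move past this transaction */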
> > Why do we want to write zeroes to the disk if we're allocating space
> > even if we're not sending an atomic write?
> >
> > (This might want an explanation for why we're doing this at all -- it's
> > to avoid unwritten extent conversion, which defeats hardware untorn
> > writes.)
>
> It's to handle the scenario where we have a partially written extent, and
> then try to issue an atomic write which covers the complete extent.
When/how would that ever happen with the forcealign bits being set
preventing unaligned allocation and writes?
> In this
> scenario, the iomap code will issue 2x IOs, which is unacceptable. So we
> ensure that the extent is completely written whenever we allocate it. At
> least that is my idea.
So return an unaligned extent, and then the IOMAP_ATOMIC checks you
add below say "no" and then the application has to do things the
slow, safe way....
> > I think we should support IOCB_ATOMIC when the mapping is unwritten --
> > the data will land on disk in an untorn fashion, the unwritten extent
> > conversion on IO completion is itself atomic, and callers still have to
> > set O_DSYNC to persist anything.
>
> But does this work for the scenario above?
Probably not, but if we want the mapping to return a single
contiguous extent mapping that spans both unwritten and written
states, then we should directly code that behaviour for atomic
IO and not try to hack around it via XFS_BMAPI_ZERO.
Unwritten extent conversion will already do the right thing in that
it will only convert unwritten regions to written in the larger
range that is passed to it, but if there are multiple regions that
need conversion then the conversion won't be atomic.
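
i.e. something vaguely like this in the IOMAP_ATOMIC mapping path (purely
illustrative - xfs_next_contig_extent() does not exist, and IO completion
would then have to convert the entire range in a single transaction):

	if (flags & IOMAP_ATOMIC) {
		struct xfs_bmbt_irec	next;

		/*
		 * Keep merging physically contiguous extents into the
		 * mapping regardless of written/unwritten state so the
		 * DIO layer can build a single bio for the whole range.
		 */
		while (imap.br_startoff + imap.br_blockcount < end_fsb &&
		       xfs_next_contig_extent(ip, &imap, &next)) {
			if (next.br_state == XFS_EXT_UNWRITTEN)
				imap.br_state = XFS_EXT_UNWRITTEN;
			imap.br_blockcount += next.br_blockcount;
		}
	}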
> > Then we can avoid the cost of
> > BMAPI_ZERO, because double-writes aren't free.
>
> About double-writes not being free, I thought that this was acceptable to
> just have this zeroing write when initially allocating the extent, as it
> should not add too much overhead in practice, i.e. it's a one-off.
The whole point about atomic writes is they are a performance
optimisation. If the cost of enabling atomic writes is that we
double the amount of IO we are doing, then we've lost more
performance than we gained by using atomic writes. That doesn't
seem desirable....
>
> >
> > > +
> > >  	error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write, dblocks,
> > >  			rblocks, force, &tp);
> > >  	if (error)
> > > @@ -812,6 +815,44 @@ xfs_direct_write_iomap_begin(
> > >  	if (error)
> > >  		goto out_unlock;
> > > +	if (flags & IOMAP_ATOMIC) {
> > > +		xfs_filblks_t unit_min_fsb, unit_max_fsb;
> > > +		unsigned int unit_min, unit_max;
> > > +
> > > +		xfs_get_atomic_write_attr(ip, &unit_min, &unit_max);
> > > +		unit_min_fsb = XFS_B_TO_FSBT(mp, unit_min);
> > > +		unit_max_fsb = XFS_B_TO_FSBT(mp, unit_max);
> > > +
> > > +		if (!imap_spans_range(&imap, offset_fsb, end_fsb)) {
> > > +			error = -EINVAL;
> > > +			goto out_unlock;
> > > +		}
> > > +
> > > +		if ((offset & mp->m_blockmask) ||
> > > +		    (length & mp->m_blockmask)) {
> > > +			error = -EINVAL;
> > > +			goto out_unlock;
> > > +		}
That belongs in the iomap DIO setup code, not here. It's also only
checking the data offset/length is filesystem block aligned, not
atomic IO aligned, too.
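
i.e. something along these lines in the iomap DIO submission path, checked
against the atomic write unit rather than the filesystem block size
(illustrative only - atomic_write_unit_min is a placeholder here; how the
limits get plumbed up to that layer is a separate question):

	if (iocb->ki_flags & IOCB_ATOMIC) {
		/* untorn writes must be unit aligned in offset and length */
		if (!IS_ALIGNED(iocb->ki_pos, atomic_write_unit_min) ||
		    !IS_ALIGNED(iov_iter_count(iter), atomic_write_unit_min))
			return -EINVAL;
	}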
> > > +
> > > +		if (imap.br_blockcount == unit_min_fsb ||
> > > +		    imap.br_blockcount == unit_max_fsb) {
> > > +			/* ok if exactly min or max */
Why? Exact sizing doesn't imply alignment is correct.
> > > +		} else if (imap.br_blockcount < unit_min_fsb ||
> > > +			   imap.br_blockcount > unit_max_fsb) {
> > > +			error = -EINVAL;
> > > +			goto out_unlock;
Why do this after an exact check?
> > > +		} else if (!is_power_of_2(imap.br_blockcount)) {
> > > +			error = -EINVAL;
> > > +			goto out_unlock;
Why does this matter? If the extent mapping spans a range larger
than was asked for, who cares what size it is as the infrastructure
is only going to do IO for the sub-range in the mapping the user
asked for....
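
The iomap iterator already clamps each mapping iteration to what was
asked for, roughly:

	/*
	 * Length of IO for this iteration: end of the mapping or end of
	 * the requested range, whichever comes first.
	 */
	length = min(iomap->offset + iomap->length, pos + count) - pos;

so an oversized mapping doesn't change how much IO gets issued.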
> > > +		}
> > > +
> > > +		if (imap.br_startoff &&
> > > +		    imap.br_startoff & (imap.br_blockcount - 1)) {
> >
> > Not sure why we care about the file position, it's br_startblock that
> > gets passed into the bio, not br_startoff.
>
> We just want to ensure that the length of the write is valid w.r.t. the
> offset within the extent, and br_startoff would be the offset within the
> aligned extent.
I'm not sure why the filesystem extent mapping code needs to care
about IOMAP_ATOMIC like this - the extent allocation behaviour is
determined by the inode forcealign flag, not IOMAP_ATOMIC.
Everything else we have to do is just mapping the offset/len that
was passed to it from the iomap DIO layer. As long as we allocate
with correct alignment and return a mapping that spans the start
offset of the requested range, we've done our job here.
Actually determining if the mapping returned for IO is suitable for
the type of IO we are doing (i.e. IOMAP_ATOMIC) is the
responsibility of the iomap infrastructure. The same checks will
have to be done for every filesystem that implements atomic writes,
so these checks belong in the generic code, not the filesystem
mapping callouts.
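
e.g. have the iomap DIO code do something like this once the filesystem
has returned its mapping, rather than every filesystem open-coding the
same checks (rough sketch only - placement and naming are illustrative):

	if (dio->iocb->ki_flags & IOCB_ATOMIC) {
		/*
		 * A single mapping must cover the entire untorn write;
		 * otherwise we would have to split it across multiple
		 * bios and the write can no longer be atomic.
		 */
		if (iomap->offset > pos ||
		    iomap->offset + iomap->length < pos + length)
			return -EINVAL;
	}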
-Dave
--
Dave Chinner
david@...morbit.com