Message-ID: <ZzKn4OyHXq5r6eiI@dread.disaster.area>
Date: Tue, 12 Nov 2024 11:57:04 +1100
From: Dave Chinner <david@...morbit.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org, hannes@...xchg.org,
clm@...a.com, linux-kernel@...r.kernel.org, willy@...radead.org,
kirill@...temov.name, linux-btrfs@...r.kernel.org,
linux-ext4@...r.kernel.org, linux-xfs@...r.kernel.org
Subject: Re: [PATCH 10/16] mm/filemap: make buffered writes work with
RWF_UNCACHED
On Mon, Nov 11, 2024 at 04:37:37PM -0700, Jens Axboe wrote:
> If RWF_UNCACHED is set for a write, mark new folios being written with
> uncached. This is done by passing in the fact that it's an uncached write
> through the folio pointer. We can only get there when IOCB_UNCACHED was
> allowed, which can only happen if the file system opts in. Opting in means
> they need to check for the LSB in the folio pointer to know if it's an
> uncached write or not. If it is, then FGP_UNCACHED should be used if
> creating new folios is necessary.
>
> Uncached writes will drop any folios they create upon writeback
> completion, but leave folios that may exist in that range alone. Since
> ->write_begin() doesn't currently take any flags, and to avoid needing
> to change the callback kernel wide, use the foliop being passed in to
> ->write_begin() to signal if this is an uncached write or not. File
> systems can then use that to mark newly created folios as uncached.
>
> Add a helper, generic_uncached_write(), that generic_file_write_iter()
> calls upon successful completion of an uncached write.
This doesn't implement an "uncached" write operation. This
implements a cache write-through operation.
We've actually been talking about this for some time as a desirable
general buffered write trait on fast SSDs. Excessive write-behind
caching is a real problem in general, especially when doing
streaming sequential writes to PCIe 4 and 5 NVMe SSDs that can do
more than 7GB/s to disk. When the page cache fills up, we see all
the same problems you are trying to work around in this series
with "uncached" writes.
IOWS, what we really want is page cache write-through as an
automatic feature for buffered writes.
> @@ -70,6 +71,34 @@ static inline int filemap_write_and_wait(struct address_space *mapping)
> return filemap_write_and_wait_range(mapping, 0, LLONG_MAX);
> }
>
> +/*
> + * generic_uncached_write - start uncached writeback
> + * @iocb: the iocb that was written
> + * @written: the amount of bytes written
> + *
> + * When writeback has been handled by write_iter, this helper should be called
> + * if the file system supports uncached writes. If %IOCB_UNCACHED is set, it
> + * will kick off writeback for the specified range.
> + */
> +static inline void generic_uncached_write(struct kiocb *iocb, ssize_t written)
> +{
> + if (iocb->ki_flags & IOCB_UNCACHED) {
> + struct address_space *mapping = iocb->ki_filp->f_mapping;
> +
> + /* kick off uncached writeback */
> + __filemap_fdatawrite_range(mapping, iocb->ki_pos,
> + iocb->ki_pos + written, WB_SYNC_NONE);
> + }
> +}
Yup, this is basically write-through.
> +
> +/*
> + * Value passed in to ->write_begin() if IOCB_UNCACHED is set for the write,
> + * and the ->write_begin() handler on a file system supporting FOP_UNCACHED
> + * must check for this and pass FGP_UNCACHED for folio creation.
> + */
> +#define foliop_uncached ((struct folio *) 0xfee1c001)
> +#define foliop_is_uncached(foliop) (*(foliop) == foliop_uncached)
> +
> /**
> * filemap_set_wb_err - set a writeback error on an address_space
> * @mapping: mapping in which to set writeback error
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 40debe742abe..0d312de4e20c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -430,6 +430,7 @@ int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
>
> return filemap_fdatawrite_wbc(mapping, &wbc);
> }
> +EXPORT_SYMBOL_GPL(__filemap_fdatawrite_range);
>
> static inline int __filemap_fdatawrite(struct address_space *mapping,
> int sync_mode)
> @@ -4076,7 +4077,7 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i)
> ssize_t written = 0;
>
> do {
> - struct folio *folio;
> + struct folio *folio = NULL;
> size_t offset; /* Offset into folio */
> size_t bytes; /* Bytes to write to folio */
> size_t copied; /* Bytes copied from user */
> @@ -4104,6 +4105,16 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i)
> break;
> }
>
> + /*
> +		 * If IOCB_UNCACHED is set here, we know the file system
> +		 * supports it, and hence it'll know to check foliop for
> +		 * being set to this magic value. If so, it's an uncached write.
> + * Whenever ->write_begin() changes prototypes again, this
> + * can go away and just pass iocb or iocb flags.
> + */
> + if (iocb->ki_flags & IOCB_UNCACHED)
> + folio = foliop_uncached;
> +
> status = a_ops->write_begin(file, mapping, pos, bytes,
> &folio, &fsdata);
> if (unlikely(status < 0))
> @@ -4234,8 +4245,10 @@ ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
> ret = __generic_file_write_iter(iocb, from);
> inode_unlock(inode);
>
> - if (ret > 0)
> + if (ret > 0) {
> + generic_uncached_write(iocb, ret);
> ret = generic_write_sync(iocb, ret);
Why isn't the write-through check inside generic_write_sync()?
Having to add it to every filesystem that supports write-through is
unwieldy. If the IO is DSYNC or SYNC, we're going to run WB_SYNC_ALL
writeback through the generic_write_sync() path already, so the only time we
actually want to run WB_SYNC_NONE write-through here is if the iocb
is not marked as dsync.
Hence I think this write-through check should be done conditionally
inside generic_write_sync(), not in addition to the writeback
generic_write_sync() might need to do...
That also gives us a common place for adding cache write-through
trigger logic (think writebehind trigger logic similar to readahead)
and this is also a place where we could automatically tag mapping
ranges for reclaim on writeback completion....
-Dave.
--
Dave Chinner
david@...morbit.com