Message-ID: <Zlz+upnpESvduk7L@dread.disaster.area>
Date: Mon, 3 Jun 2024 09:22:34 +1000
From: Dave Chinner <david@...morbit.com>
To: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
Cc: chandan.babu@...cle.com, akpm@...ux-foundation.org, brauner@...nel.org,
willy@...radead.org, djwong@...nel.org,
linux-kernel@...r.kernel.org, hare@...e.de, john.g.garry@...cle.com,
gost.dev@...sung.com, yang@...amperecomputing.com,
p.raghav@...sung.com, cl@...amperecomputing.com,
linux-xfs@...r.kernel.org, hch@....de, mcgrof@...nel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v6 07/11] iomap: fix iomap_dio_zero() for fs bs > system
page size

On Wed, May 29, 2024 at 03:45:05PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@...sung.com>
>
> iomap_dio_zero() will pad a fs block with zeroes if the direct IO size
> < fs block size. It has an implicit assumption that the fs block size
> < page size. This is true for most filesystems at the moment.
>
> If the block size > page size, this will send the contents of the page
> next to the zero page (as len > PAGE_SIZE) to the underlying block
> device, causing FS corruption.
>
> iomap is a generic infrastructure and it should not make any assumptions
> about the fs block size and the page size of the system.
>
> Signed-off-by: Pankaj Raghav <p.raghav@...sung.com>
> ---
>
> After discussing this a bit at LSFMM, it was clear that using a
> PMD-sized zero folio might not be a good idea[0]: on platforms with a
> 64k base page size, the huge zero folio can be as large as 512M just
> for zeroing small block sizes in the direct IO path.
>
> The idea of using iomap_init to allocate a 64k zero buffer was suggested
> by Dave Chinner, as it gives a decent tradeoff between memory usage and
> efficiency.
>
> This is a good enough solution for now, as moving beyond a 64k block
> size in XFS might take a while. We can work on a more generic solution
> in the future that offers different sized zero folios beyond 64k.
>
> [0] https://lore.kernel.org/linux-fsdevel/ZkdcAsENj2mBHh91@casper.infradead.org/
>
> fs/internal.h | 8 ++++++++
> fs/iomap/buffered-io.c | 5 +++++
> fs/iomap/direct-io.c | 9 +++++++--
> 3 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/fs/internal.h b/fs/internal.h
> index 84f371193f74..18eedbb82c50 100644
> --- a/fs/internal.h
> +++ b/fs/internal.h
> @@ -35,6 +35,14 @@ static inline void bdev_cache_init(void)
> int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
> get_block_t *get_block, const struct iomap *iomap);
>
> +/*
> + * iomap/buffered-io.c
> + */
> +
> +#define ZERO_FSB_SIZE (65536)
> +#define ZERO_FSB_ORDER (get_order(ZERO_FSB_SIZE))
> +extern struct page *zero_fs_block;
This is really iomap direct IO private stuff. It shouldn't be visible
anywhere else...
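
Something like this in fs/iomap/direct-io.c instead (an untested
sketch, reusing the names from the patch):

/*
 * 64k zero buffer used to pad sub-block direct IO writes. Nothing
 * outside the direct IO code needs to know it exists.
 */
#define ZERO_FSB_SIZE	(65536)
#define ZERO_FSB_ORDER	(get_order(ZERO_FSB_SIZE))
static struct page *zero_fs_block;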
> +
> /*
> * char_dev.c
> */
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index c5802a459334..2c0149c827cd 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -42,6 +42,7 @@ struct iomap_folio_state {
> };
>
> static struct bio_set iomap_ioend_bioset;
> +struct page *zero_fs_block;
>
> static inline bool ifs_is_fully_uptodate(struct folio *folio,
> struct iomap_folio_state *ifs)
> @@ -1998,6 +1999,10 @@ EXPORT_SYMBOL_GPL(iomap_writepages);
>
> static int __init iomap_init(void)
> {
> + zero_fs_block = alloc_pages(GFP_KERNEL | __GFP_ZERO, ZERO_FSB_ORDER);
> + if (!zero_fs_block)
> + return -ENOMEM;
> +
> return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> offsetof(struct iomap_ioend, io_bio),
> BIOSET_NEED_BVECS);
Just create an iomap_dio_init() function in iomap/direct-io.c and call
that from here. Then everything can be private to iomap/direct-io.c...
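
i.e. something like this (again an untested sketch; the declaration of
iomap_dio_init() would need to live in a header both files can see,
e.g. include/linux/iomap.h). In fs/iomap/direct-io.c:

int __init iomap_dio_init(void)
{
	/* Allocate the zero buffer once at init time. */
	zero_fs_block = alloc_pages(GFP_KERNEL | __GFP_ZERO,
				ZERO_FSB_ORDER);
	if (!zero_fs_block)
		return -ENOMEM;
	return 0;
}

and then in fs/iomap/buffered-io.c:

static int __init iomap_init(void)
{
	int ret = iomap_dio_init();

	if (ret)
		return ret;

	return bioset_init(&iomap_ioend_bioset,
			4 * (PAGE_SIZE / SECTOR_SIZE),
			offsetof(struct iomap_ioend, io_bio),
			BIOSET_NEED_BVECS);
}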
-Dave.
--
Dave Chinner
david@...morbit.com