Message-ID: <Y9lt/95kN6kwp+A1@casper.infradead.org>
Date: Tue, 31 Jan 2023 19:37:35 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Andreas Gruenbacher <agruenba@...hat.com>
Cc: Christoph Hellwig <hch@...radead.org>,
"Darrick J . Wong" <djwong@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-ext4@...r.kernel.org, cluster-devel@...hat.com,
Christoph Hellwig <hch@....de>
Subject: Re: [RFC v6 05/10] iomap/gfs2: Get page in page_prepare handler
On Sun, Jan 08, 2023 at 08:40:29PM +0100, Andreas Gruenbacher wrote:
> +static struct folio *
> +gfs2_iomap_page_prepare(struct iomap_iter *iter, loff_t pos, unsigned len)
> {
> + struct inode *inode = iter->inode;
> unsigned int blockmask = i_blocksize(inode) - 1;
> struct gfs2_sbd *sdp = GFS2_SB(inode);
> unsigned int blocks;
> + struct folio *folio;
> + int status;
>
> blocks = ((pos & blockmask) + len + blockmask) >> inode->i_blkbits;
> - return gfs2_trans_begin(sdp, RES_DINODE + blocks, 0);
> + status = gfs2_trans_begin(sdp, RES_DINODE + blocks, 0);
> + if (status)
> + return ERR_PTR(status);
> +
> + folio = iomap_get_folio(iter, pos);
> + if (IS_ERR(folio))
> + gfs2_trans_end(sdp);
> + return folio;
> }

Hi Andreas,

I didn't think to mention this at the time, but I was reading through
buffered-io.c and this jumped out at me. For filesystems which support
folios, we pass the entire length of the write (or at least the
remaining length of the iomap). That's intended to allow us to decide
how large a folio to allocate at some point in the future.

For GFS2, we do this:

	if (!mapping_large_folio_support(iter->inode->i_mapping))
		len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));

I'd like to drop that and pass the full length of the write to
->get_folio(). It looks like you'll have to clamp it yourself at this
point. I am kind of curious why you do one transaction per page --
I would have thought you'd rather do one transaction for the entire write.
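
Purely as an untested sketch of what I mean by clamping it yourself
(reusing the names already in the gfs2 handler quoted above), the
page_prepare path could keep the per-page transaction sizing along
these lines:

	/*
	 * Hypothetical: if iomap stops clamping len, do it here so the
	 * transaction reservation still covers at most one page's worth
	 * of blocks.
	 */
	if (!mapping_large_folio_support(inode->i_mapping))
		len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
	blocks = ((pos & blockmask) + len + blockmask) >> inode->i_blkbits;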