Message-ID: <CAHc6FU7904K4XrUhOoHp8uoBrDN0kyZ+q54anMXrJUBVCNA29A@mail.gmail.com>
Date: Mon, 26 Jul 2021 09:22:41 +0200
From: Andreas Gruenbacher <agruenba@...hat.com>
To: Andreas Gruenbacher <agruenba@...hat.com>,
Christoph Hellwig <hch@....de>,
"Darrick J . Wong" <djwong@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Huang Jianan <huangjianan@...o.com>,
linux-erofs@...ts.ozlabs.org,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Andreas Gruenbacher <andreas.gruenbacher@...il.com>
Subject: Re: [PATCH v7] iomap: make inline data support more flexible
On Mon, Jul 26, 2021 at 4:36 AM Gao Xiang <hsiangkao@...ux.alibaba.com> wrote:
> On Mon, Jul 26, 2021 at 12:16:39AM +0200, Andreas Gruenbacher wrote:
> > Here's a fixed and cleaned up version that passes fstests on gfs2.
> >
> > I see no reason why the combination of tail packing + writing should
> > cause any issues, so in my opinion, the check that disables that
> > combination in iomap_write_begin_inline should still be removed.
>
> Since no filesystem currently supports tail-packing writes, I can only
> make a wild guess. For example:
> 1) the tail-end block was not inlined, so iomap_write_end() dirtied
> the whole page (or buffer) for page writeback;
> 2) the file was then truncated so that the tail became a tail-packing
> inline block, and the last extent (page) became INLINE while still dirty;
> 3) during later writeback of dirty pages,
> if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
> would be triggered in iomap_writepage_map() for such a dirty page.
>
> As Matthew pointed out before,
> https://lore.kernel.org/r/YPrms0fWPwEZGNAL@casper.infradead.org/
> currently tail-packing inline data doesn't interact with page writeback,
> but I'm afraid a filesystem that supports tail-packing writes would need
> to reconsider how page and inode writeback work as a whole, and what the
> I/O pattern looks like with tail packing.
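That sequence can be modelled roughly like this (a user-space sketch with simplified stand-in types, not the kernel code; `page_model` and `writeback_would_warn` are made up for illustration):

```c
#include <assert.h>

/* Stand-ins for the kernel's iomap extent types. */
enum iomap_type { IOMAP_HOLE, IOMAP_MAPPED, IOMAP_INLINE };

struct page_model {
	int dirty;
	enum iomap_type type;	/* type of the extent backing the page */
};

/*
 * Model of the check in iomap_writepage_map(): writing back a dirty
 * page whose backing extent has become IOMAP_INLINE trips the WARN.
 */
static int writeback_would_warn(const struct page_model *p)
{
	return p->dirty && p->type == IOMAP_INLINE;
}
```

Walking the three steps through the model: the write dirties a MAPPED page (no warning yet), the truncate flips the backing extent to INLINE while the page stays dirty, and the next writeback pass then hits the warning condition.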
>
> >
> > It turns out that returning the number of bytes copied from
> > iomap_read_inline_data is a bit irritating: the function is really used
> > for filling the page, but that's not always the "progress" we're looking
> > for. In the iomap_readpage case, we actually need to advance by an
> > entire page, but in the iomap_file_buffered_write case, we need to
> > advance by the length parameter of iomap_write_actor or less. So I've
> > changed that back.
> >
> > I've also renamed iomap_inline_buf to iomap_inline_data and I've turned
> > iomap_inline_data_size_valid into iomap_within_inline_data, which seems
> > more useful to me.
> >
> > Thanks,
> > Andreas
> >
> > --
> >
> > Subject: [PATCH] iomap: Support tail packing
> >
> > The existing inline data support only works for cases where the entire
> > file is stored as inline data. For larger files, EROFS stores the
> > initial blocks separately and then can pack a small tail adjacent to the
> > inode. Generalise inline data to allow for tail packing. Tails may not
> > cross a page boundary in memory.
> >
> > We currently have no filesystems that support tail packing and writing,
> > so that case is currently disabled (see iomap_write_begin_inline). I'm
> > not aware of any reason why this code path shouldn't work, however.
> >
> > Cc: Christoph Hellwig <hch@....de>
> > Cc: Darrick J. Wong <djwong@...nel.org>
> > Cc: Matthew Wilcox <willy@...radead.org>
> > Cc: Andreas Gruenbacher <andreas.gruenbacher@...il.com>
> > Tested-by: Huang Jianan <huangjianan@...o.com> # erofs
> > Signed-off-by: Gao Xiang <hsiangkao@...ux.alibaba.com>
> > ---
> > fs/iomap/buffered-io.c | 34 +++++++++++++++++++++++-----------
> > fs/iomap/direct-io.c | 11 ++++++-----
> > include/linux/iomap.h | 22 +++++++++++++++++++++-
> > 3 files changed, 50 insertions(+), 17 deletions(-)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 87ccb3438bec..334bf98fdd4a 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -205,25 +205,29 @@ struct iomap_readpage_ctx {
> > struct readahead_control *rac;
> > };
> >
> > -static void
> > -iomap_read_inline_data(struct inode *inode, struct page *page,
> > +static int iomap_read_inline_data(struct inode *inode, struct page *page,
> > struct iomap *iomap)
> > {
> > - size_t size = i_size_read(inode);
> > + size_t size = i_size_read(inode) - iomap->offset;
>
> I wonder why you use i_size / iomap->offset here,
This function is supposed to copy the inline or tail data at
iomap->inline_data into the page passed to it. Logically, the inline
data starts at iomap->offset and extends until i_size_read(inode).
Relative to the page, the inline data starts at offset 0 and extends
until i_size_read(inode) - iomap->offset. It's as simple as that.
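To sketch that arithmetic (a user-space model, not the kernel function; the struct and names are simplified stand-ins for `struct iomap`):

```c
#include <assert.h>
#include <string.h>

typedef long long loff_t;

/* Simplified stand-in for the relevant fields of struct iomap. */
struct iomap_model {
	loff_t offset;		/* file offset this mapping starts at */
	const char *inline_data;
};

#define PAGE_SIZE 4096

/*
 * Model of iomap_read_inline_data(): the inline data covers the file
 * range [iomap->offset, i_size), which maps to [0, i_size - offset)
 * within the page; the remainder of the page is zero-filled.
 */
static size_t read_inline_data(const struct iomap_model *iomap,
			       loff_t i_size, char *page)
{
	size_t size = i_size - iomap->offset;

	memcpy(page, iomap->inline_data, size);
	memset(page + size, 0, PAGE_SIZE - size);
	return size;
}
```

For a 4-byte tail packed at file offset 8192 with i_size = 8196, this copies 4 bytes to the start of the page and zeroes the remaining 4092.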
> and why you're completely ignoring the iomap->length field returned by the fs.
In the iomap_readpage case (iomap_begin with flags == 0),
iomap->length will be the amount of data up to the end of the inode.
In the iomap_file_buffered_write case (iomap_begin with flags ==
IOMAP_WRITE), iomap->length will be the size of iomap->inline_data.
(For extending writes, we need to write beyond the current end of the
inode.) So iomap->length isn't all that useful for
iomap_read_inline_data.
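A tiny numeric model of that distinction (all sizes hypothetical, chosen only for illustration):

```c
#include <assert.h>

typedef long long loff_t;

/*
 * The amount of inline data iomap_read_inline_data has to copy is
 * always i_size - iomap->offset, independent of iomap->length.
 */
static loff_t inline_copy_size(loff_t i_size, loff_t iomap_offset)
{
	return i_size - iomap_offset;
}
```

In the IOMAP_WRITE case, iomap->length reports the capacity of the inline buffer, which for an extending write exceeds the data actually present, so it can't be used as the copy length.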
> Using i_size here instead of iomap->length seems like unnecessary
> coupling to me (even though in practice there is currently some limitation.)
And what is that?
Thanks,
Andreas