Message-ID: <Y7SV23/k39ygIj8/@casper.infradead.org>
Date: Tue, 3 Jan 2023 20:53:47 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Jaegeuk Kim <jaegeuk@...nel.org>
Cc: "Vishal Moola (Oracle)" <vishal.moola@...il.com>, chao@...nel.org,
linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, linux-mm@...ck.org,
fengnanchang@...il.com, linux-fsdevel@...r.kernel.org
Subject: Re: [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use
filemap_get_folios_tag()
On Thu, Dec 15, 2022 at 11:02:24AM -0800, Jaegeuk Kim wrote:
> On 12/12, Vishal Moola (Oracle) wrote:
> > @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> > tag_pages_for_writeback(mapping, index, end);
> > done_index = index;
> > while (!done && !retry && (index <= end)) {
> > - nr_pages = find_get_pages_range_tag(mapping, &index, end,
> > - tag, F2FS_ONSTACK_PAGES, pages);
> > - if (nr_pages == 0)
> > + nr_pages = 0;
> > +again:
> > + nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > + tag, &fbatch);
>
> Can't folio handle this internally with F2FS_ONSTACK_PAGES and pages?
I really want to discourage filesystems from doing this kind of thing.
The folio_batch is the natural size for doing batches of work, and
having the consistency across all these APIs of passing in a folio_batch
is quite valuable. I understand f2fs wants to get more memory in a
single batch, but the right way to do that is to use larger folios.
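For context, the loop shape the series converts f2fs to looks roughly like the sketch below. This is not compilable standalone (it uses kernel-internal APIs such as `filemap_get_folios_tag()`, `folio_batch_init()`, and `folio_batch_release()` from the patch series), and `process_one_folio()` is a hypothetical stand-in for the per-folio lock/recheck/writepage work; it only illustrates why the fixed-size `folio_batch` suffices: a large folio occupies one batch slot but can cover many pages, which is how a caller gets more memory per batch without a bigger on-stack array.

```c
/*
 * Sketch of a folio_batch-based tagged-writeback loop (kernel-style
 * pseudocode, not taken from the patch verbatim).
 */
static int writeback_loop_sketch(struct address_space *mapping,
				 pgoff_t index, pgoff_t end, xa_mark_t tag)
{
	struct folio_batch fbatch;
	unsigned int i, nr_folios;
	int ret = 0;

	folio_batch_init(&fbatch);
	while ((nr_folios = filemap_get_folios_tag(mapping, &index, end,
						   tag, &fbatch))) {
		for (i = 0; i < nr_folios; i++) {
			struct folio *folio = fbatch.folios[i];

			/* lock, recheck dirty/writeback state, write out */
			ret = process_one_folio(folio); /* hypothetical helper */
			if (ret)
				break;
		}
		/* drop the batch's folio references before the next fill */
		folio_batch_release(&fbatch);
		cond_resched();
		if (ret)
			break;
	}
	return ret;
}
```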