Message-Id: <20221130125100.80449-1-frank.li@vivo.com>
Date: Wed, 30 Nov 2022 20:51:00 +0800
From: Yangtao Li <frank.li@...o.com>
To: willy@...radead.org
Cc: chao@...nel.org, fengnanchang@...il.com,
linux-f2fs-devel@...ts.sourceforge.net,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, vishal.moola@...il.com
Subject: Re: [PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
Hi,
> Thanks for reviewing this. I think the real solution to this is
> that f2fs should be using large folios. That way, the page cache
> will keep track of dirtiness on a per-folio basis, and if your folios
> are at least as large as your cluster size, you won't need to do the
> f2fs_prepare_compress_overwrite() dance. And you'll get at least fifteen
> dirty folios per call instead of fifteen dirty pages, so your costs will
> be much lower.
>
> Is anyone interested in doing the work to convert f2fs to support
> large folios? I can help, or you can look at the work done for XFS,
> AFS and a few other filesystems.
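Just to check my understanding: with the new API, the writeback loop
would take roughly this shape? (A minimal sketch, not the actual f2fs
conversion; example_write_cache_folios() is a placeholder name and the
real I/O submission is omitted.)

#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>
#include <linux/writeback.h>

static int example_write_cache_folios(struct address_space *mapping,
				      struct writeback_control *wbc)
{
	struct folio_batch fbatch;
	pgoff_t index = wbc->range_start >> PAGE_SHIFT;
	pgoff_t end = wbc->range_end >> PAGE_SHIFT;
	unsigned int i, nr;

	folio_batch_init(&fbatch);
	while (index <= end) {
		/* Fills the batch with up to 15 dirty folios and
		 * advances index past the last one found. */
		nr = filemap_get_folios_tag(mapping, &index, end,
					    PAGECACHE_TAG_DIRTY, &fbatch);
		if (!nr)
			break;
		for (i = 0; i < nr; i++) {
			struct folio *folio = fbatch.folios[i];

			folio_lock(folio);
			/* Dirtiness is tracked per folio, so one test
			 * covers the whole multi-page range. */
			if (!folio_clear_dirty_for_io(folio)) {
				folio_unlock(folio);
				continue;
			}
			/* I/O submission would go here; omitted. */
			folio_unlock(folio);
		}
		folio_batch_release(&fbatch);
		cond_resched();
	}
	return 0;
}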
Seems like an interesting job. Not sure if I can be of any help.
What currently needs to be done to support large folios?
Are there any roadmaps or reference documents?
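If I'm reading the XFS conversion right, the opt-in itself is a single
per-inode call (a sketch under that assumption; example_setup_inode()
is a placeholder, and the real work is making every readahead,
writeback, and truncate path cope with folios of any order):

#include <linux/fs.h>
#include <linux/pagemap.h>

static void example_setup_inode(struct inode *inode)
{
	/* Tell the page cache it may allocate multi-page
	 * folios for this mapping. */
	mapping_set_large_folios(inode->i_mapping);
}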
Thx,
Yangtao