Message-ID: <aXz3GhxJU15d_ebV@casper.infradead.org>
Date: Fri, 30 Jan 2026 18:23:22 +0000
From: Matthew Wilcox <willy@...radead.org>
To: JP Kobryn <inwardvessel@...il.com>
Cc: Qu Wenruo <quwenruo.btrfs@....com>, boris@....io, clm@...com,
	dsterba@...e.com, linux-btrfs@...r.kernel.org,
	linux-kernel@...r.kernel.org, kernel-team@...a.com,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>
Subject: Re: [RFC PATCH] btrfs: defer freeing of subpage private state to
 free_folio

On Fri, Jan 30, 2026 at 09:10:11AM -0800, JP Kobryn wrote:
> On 1/29/26 9:14 PM, Matthew Wilcox wrote:
> > On Fri, Jan 30, 2026 at 01:46:59PM +1030, Qu Wenruo wrote:
> > > Another question is, why only two fses (nfs for dir inode, and orangefs) are
> > > utilizing the free_folio() callback.
> > 
> > Alas, secretmem and guest_memfd are also using it.  Nevertheless, I'm
> > not a fan of this interface existing, and would prefer to not introduce
> > new users.  Like launder_folio, which btrfs has also mistakenly used.
> 
> The part that felt concerning is how the private state is lost. If
> release_folio() frees this state but the folio persists in the cache,
> users of the folio afterward have to recreate the state. Is that the
> expectation on how filesystems should handle this situation?

Yes; that's how iomap and buffer_head both handle it.  If an operation
happens that needs sub-folio tracking, allocate the per-folio state
then continue as before.
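[The lazy (re)attachment pattern described above can be sketched in userspace C.  The struct and function names below are invented for illustration; they are not the real iomap or buffer_head API, just the shape of the idea: if ->release_folio freed the private state but the folio survived in the page cache, the next operation that needs sub-folio tracking simply rebuilds it.]

```c
#include <stdlib.h>

/* Illustrative stand-ins for the real kernel structures. */
struct folio_like {
    void *private;          /* per-folio state; may have been freed */
};

struct sub_state {
    unsigned long uptodate_bits;
    unsigned long dirty_bits;
};

/* Lazily (re)attach per-folio tracking state on the first operation
 * that needs sub-folio granularity.  If an earlier release_folio freed
 * the state while the folio stayed cached, this just recreates it. */
static struct sub_state *folio_get_sub_state(struct folio_like *folio)
{
    if (!folio->private)
        folio->private = calloc(1, sizeof(struct sub_state));
    return folio->private;
}
```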

> In the case of the existing btrfs code, when the state is recreated (in
> subpage mode), the bitmap data and lock states are all zeroed.

If the folio is uptodate, iomap will initialise the uptodate bitmap to
all-set rather than all-clear.  The dirty bitmap will similarly reflect
whether the folio dirty bit is set or clear.  Obviously we lose some
precision there (the folio may have been only partially dirty, or some of
the blocks in it may already have been uptodate), but that's not likely
to cause any performance problems.  When ->release_folio is called, we're
expecting to evict the folio from the page cache ...  we just failed to
do so, so it's reasonable to treat it as a freshly allocated folio.
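[The bitmap reconstruction described above can be sketched as follows.  This is a hypothetical userspace model, not the actual iomap code: when the per-block state is recreated, each bitmap is seeded from the corresponding folio-level flag, so a fully-uptodate folio gets an all-set uptodate bitmap rather than an all-clear one, at the cost of the sub-folio precision mentioned above.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative folio-level flags (stand-in for the real folio). */
struct folio_flags {
    bool uptodate;
    bool dirty;
};

/* One byte per block for simplicity; the kernel uses packed bitmaps. */
struct block_state {
    unsigned int nr_blocks;
    uint8_t *uptodate;
    uint8_t *dirty;
};

/* Recreate per-block state from the folio-level flags: if the folio is
 * uptodate, mark every block uptodate; likewise for dirty.  Precision
 * about which blocks were individually uptodate or dirty is lost. */
static struct block_state *block_state_alloc(const struct folio_flags *folio,
                                             unsigned int nr_blocks)
{
    struct block_state *bs = calloc(1, sizeof(*bs));
    if (!bs)
        return NULL;
    bs->nr_blocks = nr_blocks;
    bs->uptodate = malloc(nr_blocks);
    bs->dirty = malloc(nr_blocks);
    if (!bs->uptodate || !bs->dirty) {
        free(bs->uptodate);
        free(bs->dirty);
        free(bs);
        return NULL;
    }
    memset(bs->uptodate, folio->uptodate ? 1 : 0, nr_blocks);
    memset(bs->dirty, folio->dirty ? 1 : 0, nr_blocks);
    return bs;
}
```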
