Message-ID: <20250330064732.3781046-3-mcgrof@kernel.org>
Date: Sat, 29 Mar 2025 23:47:31 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: brauner@...nel.org,
jack@...e.cz,
tytso@....edu,
adilger.kernel@...ger.ca,
linux-ext4@...r.kernel.org,
riel@...riel.com
Cc: willy@...radead.org,
hannes@...xchg.org,
oliver.sang@...el.com,
dave@...olabs.net,
david@...hat.com,
axboe@...nel.dk,
hare@...e.de,
david@...morbit.com,
djwong@...nel.org,
ritesh.list@...il.com,
linux-fsdevel@...r.kernel.org,
linux-block@...r.kernel.org,
linux-mm@...ck.org,
gost.dev@...sung.com,
p.raghav@...sung.com,
da.gomez@...sung.com,
mcgrof@...nel.org
Subject: [PATCH 2/3] fs/buffer: avoid races with folio migrations on __find_get_block_slow()

Filesystems which use buffer-heads and cannot guarantee, for example
via a folio lock, that there are no other references to the folio must
use buffer_migrate_folio_norefs() for the address space mapping's
migrate_folio() callback. There are only 3 filesystems which use this
callback:
1) the block device cache
2) ext4 for its ext4_journalled_aops
3) nilfs2
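
For reference, wiring this up looks roughly like the sketch below,
modeled on def_blk_aops in block/fops.c (unrelated callbacks elided;
this is an illustration, not part of the patch):

	#include <linux/fs.h>
	#include <linux/buffer_head.h>

	/*
	 * Sketch: an address space that cannot rule out extra
	 * references to its folios must pick the norefs migration
	 * variant.
	 */
	static const struct address_space_operations example_norefs_aops = {
		.dirty_folio		= block_dirty_folio,
		.invalidate_folio	= block_invalidate_folio,
		.migrate_folio		= buffer_migrate_folio_norefs,
	};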

Commit ebdf4de5642fb6 ("mm: migrate: fix reference check race
between __find_get_block() and migration") added a spin lock to
prevent races with page migration which ext4 users were reporting
through the SUSE bugzilla (bnc#1137609 [0]). Although implicit,
the spinlock is only held for users of buffer_migrate_folio_norefs()
which was added by commit 89cb0888ca148 ("mm: migrate: provide
buffer_migrate_page_norefs()") to support page migration on block
device folios. Later commit dae999602eeb ("ext4: stop providing
.writepage hook") made ext4_journalled_aops use the same callback.

It is worth elaborating on why ext4's journalled aops use this: so
that buffers cannot be modified under jbd2's hands, as that can cause
data corruption. For example, when the commit code does writeout of
transaction buffers in jbd2_journal_write_metadata_buffer(), we don't
hold the page lock, don't have the page writeback bit set, and don't
have the buffer locked. So the page migration code would go and
happily migrate the page elsewhere while the copy is running, thus
corrupting data.
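
To make the race concrete, an illustrative interleaving of the
scenario described above (a sketch, not an actual trace):

	jbd2 commit code                    folio migration
	----------------                    ---------------
	jbd2_journal_write_metadata_buffer()
	  starts copying buffer data
	                                    sees no folio lock, no
	                                    writeback bit, no locked bh;
	                                    migrates the folio and frees
	                                    the old one
	  copy keeps reading the stale
	  source -> corrupted metadata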

Although we don't have exact traces of the original filesystem
corruption, we can reproduce fs corruption on ext4 by just removing
the spinlock and stress testing the filesystem with generic/750: with
kdevops using libvirt on the ext4-4k profile we eventually end up,
after about 3 hours of testing, with things like the below, as
reported recently [1]:
Mar 28 03:36:37 extra-ext4-4k unknown: run fstests generic/750 at 2025-03-28 03:36:37
<-- etc -->
Mar 28 05:57:09 extra-ext4-4k kernel: EXT4-fs error (device loop5): ext4_get_first_dir_block:3538: inode #5174: comm fsstress: directory missing '.'
Mar 28 06:04:43 extra-ext4-4k kernel: EXT4-fs warning (device loop5): ext4_empty_dir:3088: inode #5176: comm fsstress: directory missing '.'
Mar 28 06:42:05 extra-ext4-4k kernel: EXT4-fs error (device loop5): __ext4_find_entry:1626: inode #5173: comm fsstress: checksumming directory block 0
Mar 28 08:16:43 extra-ext4-4k kernel: EXT4-fs error (device loop5): ext4_find_extent:938: inode #1104560: comm fsstress: pblk 4932229 bad header/extent: invalid magic - magic 8383, entries 33667, max 33667(0), depth 33667(0)

The block device cache is a user of buffer_migrate_folio_norefs() and
it supports large folios; in that case page migration can sleep in
folio_mc_copy(), which may cond_resched(). So we want to avoid
requiring a spin lock even in the buffer_migrate_folio_norefs() case,
so as to enable large folios on buffer-head folio migration. To
address this we must avoid races with folio migration in a different
way.
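
For context, the norefs path currently holds the lock across the
copy; roughly, paraphrasing __buffer_migrate_folio() in mm/migrate.c
for the check_refs == true case (simplified):

	spin_lock(&mapping->i_private_lock);
	/* verify no buffer_head in the folio has an elevated b_count */
	folio_migrate_mapping(mapping, dst, src, 0);
	folio_mc_copy(dst, src);	/* may cond_resched() on large folios */
	spin_unlock(&mapping->i_private_lock);

Sleeping while holding a spinlock is not allowed, which is why the
lock has to go before large folios can be enabled here.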

This provides an alternative: avoid giving away a folio in
__find_get_block_slow() when it is a folio migration candidate, so as
to let us later rip out the spin_lock() held on the folio migration
buffer_migrate_folio_norefs() path. We limit the scope of this sanity
check to filesystems which cannot provide any guarantees that there
are no references to the folio, i.e. only users of the
buffer_migrate_folio_norefs() folio migration callback.

Although we have no direct, clear semantics to check whether a folio
is being evaluated for folio migration, we know that folio migration
happens on LRU folios [2]. Since folio migration must not be called
on folio_test_writeback() folios, we can skip those as well. The
other corner case to be concerned about is a driver implementing
movable operations (mops), but the docs indicate the VM seems to use
the LRU for that too.
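
Spelled out as a predicate, the heuristic the hunk below encodes
could read as follows (the helper and its name are hypothetical, the
patch open-codes these checks):

	/*
	 * Hypothetical helper: true when the folio may currently be a
	 * migration candidate on the norefs path, i.e. it is on the
	 * LRU, locked (migration holds the folio lock while it works,
	 * __find_get_block_slow() itself does not lock the folio), and
	 * not under writeback (migration skips writeback folios).
	 */
	static bool folio_maybe_migrating_norefs(struct folio *folio)
	{
		if (folio->mapping->a_ops->migrate_folio !=
		    buffer_migrate_folio_norefs)
			return false;
		return folio_test_lru(folio) &&
		       folio_test_locked(folio) &&
		       !folio_test_writeback(folio);
	}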

A real concern to have here is whether the check starves readers or
writers who want to read a block into the page cache while it is part
of the LRU. The path __getblk_slow() will first try
__find_get_block(), which uses __filemap_get_folio() without
FGP_CREAT, and if that fails it will call grow_buffers(), which calls
__filemap_get_folio() again, now with FGP_CREAT; but
__filemap_get_folio() won't create a folio for us if it already
exists. So if the folio was on the LRU, __getblk_slow() will
essentially end up checking again for the folio until it is gone from
the page cache or migration has ended, effectively preventing a race
with folio migration, which is what we want.
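
For reference, an outline of that retry, paraphrasing __getblk_slow()
in fs/buffer.c (simplified):

	for (;;) {
		struct buffer_head *bh;

		bh = __find_get_block(bdev, block, size);
		if (bh)
			return bh;

		/*
		 * grow_buffers() ends up in __filemap_get_folio()
		 * with FGP_CREAT, which returns the already-cached
		 * folio rather than creating a new one.
		 */
		if (!grow_buffers(bdev, block, size, gfp))
			return NULL;
	}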

This commit and the subsequent one provide an alternative fix for the
filesystem corruption noted above.

Link: https://bugzilla.suse.com/show_bug.cgi?id=1137609 # [0]
Link: https://lkml.kernel.org/r/Z-ZwToVfJbdTVRtG@bombadil.infradead.org # [1]
Link: https://docs.kernel.org/mm/page_migration.html # [2]
Signed-off-by: Luis Chamberlain <mcgrof@...nel.org>
---
fs/buffer.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index c7abb4a029dc..a4e4455a6ce2 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -208,6 +208,15 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
 	head = folio_buffers(folio);
 	if (!head)
 		goto out_unlock;
+
+	if (folio->mapping->a_ops->migrate_folio &&
+	    folio->mapping->a_ops->migrate_folio == buffer_migrate_folio_norefs) {
+		if (folio_test_lru(folio) &&
+		    folio_test_locked(folio) &&
+		    !folio_test_writeback(folio))
+			goto out_unlock;
+	}
+
 	bh = head;
 	do {
 		if (!buffer_mapped(bh))
--
2.47.2