Message-ID: <20250330064732.3781046-4-mcgrof@kernel.org>
Date: Sat, 29 Mar 2025 23:47:32 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: brauner@...nel.org,
jack@...e.cz,
tytso@....edu,
adilger.kernel@...ger.ca,
linux-ext4@...r.kernel.org,
riel@...riel.com
Cc: willy@...radead.org,
hannes@...xchg.org,
oliver.sang@...el.com,
dave@...olabs.net,
david@...hat.com,
axboe@...nel.dk,
hare@...e.de,
david@...morbit.com,
djwong@...nel.org,
ritesh.list@...il.com,
linux-fsdevel@...r.kernel.org,
linux-block@...r.kernel.org,
linux-mm@...ck.org,
gost.dev@...sung.com,
p.raghav@...sung.com,
da.gomez@...sung.com,
mcgrof@...nel.org,
syzbot+f3c6fda1297c748a7076@...kaller.appspotmail.com
Subject: [PATCH 3/3] mm/migrate: avoid atomic context on buffer_migrate_folio_norefs() migration

buffer_migrate_folio_norefs() should avoid holding mapping->i_private_lock
in atomic context, otherwise we cannot support large folios there. The
prior commit "fs/buffer: avoid races with folio migrations on
__find_get_block_slow()" removed the only rationale for the atomic
context, so we can now release the spin lock before any path that may
block.
Reported-by: kernel test robot <oliver.sang@...el.com>
Reported-by: syzbot+f3c6fda1297c748a7076@...kaller.appspotmail.com
Closes: https://lore.kernel.org/oe-lkp/202503101536.27099c77-lkp@intel.com
Fixes: 3c20917120ce ("block/bdev: enable large folio support for large logical block sizes")
Signed-off-by: Luis Chamberlain <mcgrof@...nel.org>
---
 mm/migrate.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 712ddd11f3f0..f3047c685706 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -861,12 +861,12 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 			}
 			bh = bh->b_this_page;
 		} while (bh != head);
+		spin_unlock(&mapping->i_private_lock);
 		if (busy) {
 			if (invalidated) {
 				rc = -EAGAIN;
 				goto unlock_buffers;
 			}
-			spin_unlock(&mapping->i_private_lock);
 			invalidate_bh_lrus();
 			invalidated = true;
 			goto recheck_buffers;
@@ -884,8 +884,6 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 	} while (bh != head);
 
 unlock_buffers:
-	if (check_refs)
-		spin_unlock(&mapping->i_private_lock);
 	bh = head;
 	do {
 		unlock_buffer(bh);
--
2.47.2