Message-ID: <ZowLTDJG_i2ILmx7@x1n>
Date: Mon, 8 Jul 2024 11:52:44 -0400
From: Peter Xu <peterx@...hat.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Zi Yan <ziy@...dia.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Huang Ying <ying.huang@...el.com>,
David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH hotfix] mm/migrate: fix kernel BUG at
mm/compaction.c:2761!
On Tue, Jun 11, 2024 at 10:06:20PM -0700, Hugh Dickins wrote:
> I hit the VM_BUG_ON(!list_empty(&cc->migratepages)) in compact_zone();
> and if DEBUG_VM were off, then pages would be lost on a local list.
>
> Our convention is that if migrate_pages() reports complete success (0),
> then the migratepages list will be empty; but if it reports an error or
> some pages remaining, then its caller must putback_movable_pages().
>
> There's a new case in which migrate_pages() has been reporting complete
> success, but returning with pages left on the migratepages list: when
> migrate_pages_batch() successfully split a folio on the deferred list,
> but then the "Failure isn't counted" call does not dispose of them all.
>
> Since that block is expecting the large folio to have been counted as 1
> failure already, and since the return code is later adjusted to success
> whenever the returned list is found empty, the simple way to fix this
> safely is to count splitting the deferred folio as "a failure".
>
> Fixes: 7262f208ca68 ("mm/migrate: split source folio if it is on deferred split list")
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> ---
> A hotfix to 6.10-rc, not needed for stable.
>
> mm/migrate.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1654,7 +1654,12 @@ static int migrate_pages_batch(struct list_head *from,
>
> /*
> * The rare folio on the deferred split list should
> - * be split now. It should not count as a failure.
> + * be split now. It should not count as a failure:
> + * but increment nr_failed because, without doing so,
> + * migrate_pages() may report success with (split but
> + * unmigrated) pages still on its fromlist; whereas it
> + * always reports success when its fromlist is empty.
> + *
> * Only check it without removing it from the list.
> * Since the folio can be on deferred_split_scan()
> * local list and removing it can cause the local list
> @@ -1669,6 +1674,7 @@ static int migrate_pages_batch(struct list_head *from,
> if (nr_pages > 2 &&
> !list_empty(&folio->_deferred_list)) {
> if (try_split_folio(folio, split_folios) == 0) {
> + nr_failed++;
> stats->nr_thp_split += is_thp;
> stats->nr_split++;
> continue;
> --
> 2.35.3
>
>
We probably hit the same issue in our testbeds, but via the other path,
migrate_misplaced_folio(), which contains a BUG_ON() rather than a
VM_BUG_ON(). It looks like this patch can fix that as well.
While looking at that, I wonder whether we overlooked one more spot: we
almost always use putback_movable_pages() for migration failures, but
migrate_misplaced_folio() instead puts back only the single folio. I
suspect that was an oversight, but I'd like to check with all of you
here, since I think the folio can already have been split by the time we
reach that point, leaving extra entries on the list. So I wonder whether
the change below would make sense as a fix from that point of view.
===8<===
diff --git a/mm/migrate.c b/mm/migrate.c
index e10d2445fbd8..20da2595527a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2615,14 +2615,8 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
NULL, node, MIGRATE_ASYNC,
MR_NUMA_MISPLACED, &nr_succeeded);
- if (nr_remaining) {
- if (!list_empty(&migratepages)) {
- list_del(&folio->lru);
- node_stat_mod_folio(folio, NR_ISOLATED_ANON +
- folio_is_file_lru(folio), -nr_pages);
- folio_putback_lru(folio);
- }
- }
+ if (nr_remaining && !list_empty(&migratepages))
+ putback_movable_pages(&migratepages);
if (nr_succeeded) {
count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
===8<===
Thanks,
--
Peter Xu