Message-Id: <20230602230552.350731-3-peterx@redhat.com>
Date: Fri, 2 Jun 2023 19:05:50 -0400
From: Peter Xu <peterx@...hat.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: David Hildenbrand <david@...hat.com>,
Alistair Popple <apopple@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Johannes Weiner <hannes@...xchg.org>,
John Hubbard <jhubbard@...dia.com>,
Naoya Horiguchi <naoya.horiguchi@....com>, peterx@...hat.com,
Muhammad Usama Anjum <usama.anjum@...labora.com>,
Hugh Dickins <hughd@...gle.com>,
Mike Rapoport <rppt@...nel.org>
Subject: [PATCH 2/4] mm/migrate: Unify and retry an unstable pmd when hit
There's one pmd_bad() check, but it would be better to use pmd_trans_unstable(),
which also clears a bad pmd via pmd_clear_bad().
pmd_bad() alone is also not enough: a THP can be inserted concurrently, so by
the time we check, the pmd can be !bad yet a THP, and walking the pte level
under it is illegal.
The function already used pmd_trans_unstable() in one place, but only after a
pmd split. Merge the two checks into one, and when it triggers retry the whole
pmd.
Cc: Alistair Popple <apopple@...dia.com>
Cc: John Hubbard <jhubbard@...dia.com>
Signed-off-by: Peter Xu <peterx@...hat.com>
---
mm/migrate_device.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index d30c9de60b0d..6fc54c053c05 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -83,9 +83,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		if (is_huge_zero_page(page)) {
 			spin_unlock(ptl);
 			split_huge_pmd(vma, pmdp, addr);
-			if (pmd_trans_unstable(pmdp))
-				return migrate_vma_collect_skip(start, end,
-								walk);
 		} else {
 			int ret;
 
@@ -106,8 +103,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		}
 	}
 
-	if (unlikely(pmd_bad(*pmdp)))
-		return migrate_vma_collect_skip(start, end, walk);
+	if (unlikely(pmd_trans_unstable(pmdp))) {
+		walk->action = ACTION_AGAIN;
+		return 0;
+	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
 	arch_enter_lazy_mmu_mode();
--
2.40.1