Message-Id: <20250920104220.1399-1-sj@kernel.org>
Date: Sat, 20 Sep 2025 03:42:20 -0700
From: SeongJae Park <sj@...nel.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: SeongJae Park <sj@...nel.org>,
Xinyu Zheng <zhengxinyu6@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
damon@...ts.linux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
zouyipeng@...wei.com
Subject: Re: [BUG REPORT] mm/damon: softlockup when kdamond walk page with cpu hotplug
On Fri, 19 Sep 2025 20:56:56 -0700 (PDT) Hugh Dickins <hughd@...gle.com> wrote:
> On Thu, 18 Sep 2025, SeongJae Park wrote:
>
> > Hello,
> >
> > On Thu, 18 Sep 2025 03:00:29 +0000 Xinyu Zheng <zhengxinyu6@...wei.com> wrote:
> >
> > > A softlockup issue was found with stress test:
[...]
> This had me worried for a while: thought we might be needing to change
> lots of other places, and scatter cond_resched()s here and there.
>
> But no: no need for cond_resched()'s, this is all just a confusion about
> where pmd migration entries are handled: a pmd migration entry is accepted
> by pmd_trans_huge_lock(), but is not accepted by pmd_trans_huge().
>
> See fs/proc/task_mmu.c for mm_walk examples of trying pmd_trans_huge_lock(),
> then pte_offset_map_lock() if it failed, or ACTION_AGAIN if that failed too.
>
> When I ACTION_AGAINed damon_mkold_pmd_entry() and damon_young_pmd_entry()
> in 6.5, I didn't realize that the pmd migration entries were reaching the
> pte_offset_map_lock(), with corrupt results (or did pmd_bad() filter them
> out? I didn't think so, but it'll take me too long now to work out whether
> a pmd migration entry counts as pmd_bad or not); but knew that the new
> pte_offset_map_lock() filtered them out safely if there was a race.
>
> But they've been reaching it without any race, so yes the ACTION_AGAIN
> would send the mm_walk back again and again for as long as the pmd
> migration entry remained there: not good, and Xinyu finds a lockup
> when hotplugging CPU without preemption.
Thank you for your detailed and kind explanation, Hugh!
>
> My suggested patch below (please take it over SJ, and do with it what
> you will), converting damon_mkold_pmd_entry() and damon_young_pmd_entry()
> to use pmd_trans_huge_lock() as I'd been expecting, so handling the
> pmd migration entry up in that block. (Side note: patch against 6.17-rc,
> but I see mm.git adds also a damos_va_stat_pmd_entry(), which would
> better be converted to the same pmd_trans_huge_lock() pattern -
> though I notice you're not setting ACTION_AGAIN in that one.)
>
> But I have to admit, there's very little gained by using ACTION_AGAIN
> in these functions: it helps not to miss the range when racing against
> THP collapse or split, but you're already content to miss the extent
> if it has a pmd migration entry, and there can still be an instant when
> the range which used to have a page table does not yet show the THP.
>
> So if you prefer a smaller fix (but a larger source file!), just
> dropping the walk->action = ACTION_AGAIN lines should be good enough.
I agree with all your points.
In the long term, I'd prefer the smaller source file following your suggested
change below (using pmd_trans_huge_lock()). But in the short term, I'd prefer
the smaller fix (dropping 'walk->action = ACTION_AGAIN'), since it should also
be merged into stable@, up to 6.5.y.
So, I'd like to suggest the following. Let's drop the
'walk->action = ACTION_AGAIN' lines as in the patch attached below, for now.
After it is confirmed to fix the issue and merged into the relevant trees
including the stable trees, let's revisit the code and clean it up following
the pmd_trans_huge_lock() pattern.
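
For reference, the longer-term cleanup would look roughly like the sketch
below, modeled on the fs/proc/task_mmu.c examples Hugh mentioned. This is
only an illustration of the shape of the change, not a compilable patch;
the THP-handling body under the lock is elided, and the exact preprocessor
guards in vaddr.c are omitted for brevity:

	static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
			unsigned long next, struct mm_walk *walk)
	{
		pte_t *pte;
		spinlock_t *ptl;

		ptl = pmd_trans_huge_lock(pmd, walk->vma);
		if (ptl) {
			/*
			 * THP and pmd migration entries are both handled
			 * here, under the pmd lock; a migration entry is
			 * simply skipped.
			 */
			if (pmd_present(pmdp_get(pmd)))
				damon_pmdp_mkold(pmd, walk->vma, addr);
			spin_unlock(ptl);
			return 0;
		}

		pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
		if (!pte)
			return 0;	/* raced with collapse/split; skip */
		if (pte_present(ptep_get(pte)))
			damon_ptep_mkold(pte, walk->vma, addr);
		pte_unmap_unlock(pte, ptl);
		return 0;
	}
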
Please let me know if I'm missing something, or you have other opinions.
Xinyu, could you please test whether the patch attached below fixes your issue
and let us know the result?
If Xinyu confirms the validity of the fix and no one objects to the above plan,
I will post the fix as a formal one with a better commit message.
Thanks,
SJ
[...]
==== >8 ====
>From 743cafda8982624229541741dbfe5ff252328ac0 Mon Sep 17 00:00:00 2001
From: SeongJae Park <sj@...nel.org>
Date: Sat, 20 Sep 2025 03:35:34 -0700
Subject: [PATCH] mm/damon/vaddr: do not try page table walk again
For a quick fix of a softlockup issue:
https://lore.kernel.org/20250918030029.2652607-1-zhengxinyu6@huawei.com
Signed-off-by: SeongJae Park <sj@...nel.org>
---
mm/damon/vaddr.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 8c048f9b129e..7e834467b2d8 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -328,10 +328,8 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
}
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
- if (!pte) {
- walk->action = ACTION_AGAIN;
+ if (!pte)
return 0;
- }
if (!pte_present(ptep_get(pte)))
goto out;
damon_ptep_mkold(pte, walk->vma, addr);
@@ -481,10 +479,8 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
- if (!pte) {
- walk->action = ACTION_AGAIN;
+ if (!pte)
return 0;
- }
ptent = ptep_get(pte);
if (!pte_present(ptent))
goto out;
--
2.39.5