Message-ID: <dfe192a8-4a88-407a-83c2-976d953a2227@kernel.org>
Date: Mon, 24 Nov 2025 10:14:20 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Zhiheng Tao <junchuan.tzh@...group.com>, akpm@...ux-foundation.org,
lorenzo.stoakes@...cle.com
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
npache@...hat.com, ryan.roberts@....com, dev.jain@....com,
baohua@...nel.org, lance.yang@...ux.dev, shy828301@...il.com,
zokeefe@...gle.com, peterx@...hat.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/khugepaged: Fix skipping of alloc sleep after second
failure
On 11/24/25 07:19, Zhiheng Tao wrote:
> In khugepaged_do_scan(), two consecutive allocation failures cause
> the logic to skip the dedicated 60s throttling sleep
> (khugepaged_alloc_sleep_millisecs), forcing a fallback to the
> shorter 10s scanning interval via the outer loop.
>
> Since fragmentation is unlikely to resolve in 10s, this results in
> wasted CPU cycles on immediate retries.
Why shouldn't memory compaction be able to compact a single THP in 10s?
Why should it resolve in 60s?
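
For reference, the failure handling in question -- a sketch of the
khugepaged_do_scan() loop as I read mm/khugepaged.c; the exact upstream
code may differ slightly:

	if (result == SCAN_ALLOC_HUGE_PAGE_FAIL) {
		/*
		 * First failure: sleep for
		 * khugepaged_alloc_sleep_millisecs (60s by default).
		 * Second consecutive failure: wait is already false,
		 * so we break out *without* sleeping and fall back to
		 * khugepaged_wait_work(), which only waits
		 * khugepaged_scan_sleep_millisecs (10s by default).
		 */
		if (!wait)
			break;
		wait = false;
		khugepaged_alloc_sleep();
	}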
>
> Reorder the failure logic to ensure khugepaged_alloc_sleep() is
> always called on each allocation failure.
>
> Fixes: c6a7f445a272 ("mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA")
What are we fixing here? This sounds like a change that might be better
on some systems, but worse on others?
We really need more information on when/how the issue was hit, and how
this patch actually moves the needle.
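
(If I understand the proposal, the reorder would look something like
the following sketch -- throttle on every failure first, then decide
whether to stop; an illustration only, not the actual diff:)

	if (result == SCAN_ALLOC_HUGE_PAGE_FAIL) {
		/* Sleep on every allocation failure ... */
		khugepaged_alloc_sleep();
		/* ... but still stop after two failures in a row. */
		if (!wait)
			break;
		wait = false;
	}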
--
Cheers
David