Message-ID: <11922bd5.7fae.1990464d9c8.Coremail.yangshiguang1011@163.com>
Date: Mon, 1 Sep 2025 16:29:02 +0800 (CST)
From: yangshiguang <yangshiguang1011@....com>
To: "Vlastimil Babka" <vbabka@...e.cz>
Cc: "David Rientjes" <rientjes@...gle.com>, harry.yoo@...cle.com,
akpm@...ux-foundation.org, cl@...two.org, roman.gushchin@...ux.dev,
glittao@...il.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
yangshiguang <yangshiguang@...omi.com>, stable@...r.kernel.org
Subject: Re: Re: [PATCH v4] mm: slub: avoid wake up kswapd in
set_track_prepare
At 2025-09-01 16:15:04, "Vlastimil Babka" <vbabka@...e.cz> wrote:
>On 9/1/25 09:50, David Rientjes wrote:
>> On Sat, 30 Aug 2025, yangshiguang1011@....com wrote:
>>
>>> From: yangshiguang <yangshiguang@...omi.com>
>>>
>>> From: yangshiguang <yangshiguang@...omi.com>
>>>
>>
>> Duplicate lines.
>>
>>> set_track_prepare() can incur lock recursion.
>>> It is called from hrtimer_start_range_ns() while holding
>>> per_cpu(hrtimer_bases)[n].lock, but with CONFIG_DEBUG_OBJECTS_TIMERS
>>> enabled it may wake up kswapd, which then tries to take the same
>>> per_cpu(hrtimer_bases)[n].lock.
>>>
>>> Avoid this deadlock by passing the caller's allocation flags in
>>> explicitly instead of implicitly waking kswapd. Since the slab caller
>>> context has preemption disabled, __GFP_KSWAPD_RECLAIM must not appear
>>> in gfp_flags.
>>>
>>>
>>
>> This mentions __GFP_KSWAPD_RECLAIM, but the patch actually masks off
>> __GFP_DIRECT_RECLAIM, which would be a heavier-weight operation.
>> Disabling direct reclaim does not necessarily imply that kswapd will
>> be disabled as well.
>
>Yeah I think the changelog should say __GFP_DIRECT_RECLAIM.
>
>> Are you meaning to clear __GFP_RECLAIM in set_track_prepare()?
>
>No, because if the context (e.g. the hrtimers) can't support
>__GFP_KSWAPD_RECLAIM, it won't have it in gfp_flags, and we now pass
>those flags to set_track_prepare(), so it already won't be there.
Sorry, it should be __GFP_DIRECT_RECLAIM. I will resend the patch.