Message-ID: <729aa946-7609-890a-3a13-4b0a58359aaa@gmail.com>
Date: Fri, 18 Jul 2025 11:30:03 +0800
From: Hao Jia <jiahao.kernel@...il.com>
To: Yuanchu Xie <yuanchu@...gle.com>
Cc: akpm@...ux-foundation.org, yuzhao@...gle.com, shakeel.butt@...ux.dev,
mhocko@...nel.org, lorenzo.stoakes@...cle.com, kinseyho@...gle.com,
hannes@...xchg.org, gthelen@...gle.com, david@...hat.com,
axelrasmussen@...gle.com, zhengqi.arch@...edance.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Hao Jia <jiahao1@...iang.com>
Subject: Re: [PATCH] mm/mglru: Update MG-LRU proactive reclaim statistics only
to memcg
On 2025/7/18 04:18, Yuanchu Xie wrote:
Hi Yuanchu,
> Hi Hao,
>
> On Thu, Jul 17, 2025 at 1:29 AM Hao Jia <jiahao.kernel@...il.com> wrote:
>>
>> From: Hao Jia <jiahao1@...iang.com>
>>
>> Users can use /sys/kernel/debug/lru_gen to trigger proactive memory reclaim
>> of a specified memcg. Currently, statistics such as pgrefill, pgscan and
>> pgsteal will be updated to the /proc/vmstat system memory statistics.
>
> This is a debugfs interface and it's not meant for use in production
> or provide a stable ABI. Does memory.reclaim not work for your needs?
>
No, I am comparing the two interfaces.
Thanks for the reminder. My intent is to use the run_aging() path of
this interface to age folios, and to separate proactive reclamation
from the expensive walk_mm() passes by toggling BIT(LRU_GEN_MM_WALK).
For example, a user-space agent could enable LRU_GEN_MM_WALK and
trigger run_aging(), then disable LRU_GEN_MM_WALK and use
memory.reclaim to trigger proactive reclamation, avoiding the long
latency caused by walk_mm().
Perhaps it would be more reasonable to run walk_mm() from a workqueue?
I am not sure whether this idea is sound; any suggestions are welcome.
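To make the sequence concrete, here is a rough sketch of the workflow
I have in mind. The sysfs/debugfs paths and the 0x0002 capability bit
for the page-table walk come from the multi-gen LRU admin guide; the
memcg id, node id, generation number, and reclaim target below are
placeholder values, not something from the patch:

```shell
#!/bin/sh
# Sketch only: assumes CONFIG_LRU_GEN, debugfs mounted at
# /sys/kernel/debug, and cgroup v2. The memcg id (36), node id (0),
# max_gen (7), and the 1G reclaim target are placeholders.

# 1. Enable the page-table-walk capability (bit 0x0002) on top of the
#    main switch (bit 0x0001).
echo 0x0003 > /sys/kernel/mm/lru_gen/enabled

# 2. Trigger run_aging() for the memcg on node 0, scanning both anon
#    and file pages (can_swap=1, force_scan=1). max_gen (7 here) must
#    match the current max generation read from the same file.
echo '+ 36 0 7 1 1' > /sys/kernel/debug/lru_gen

# 3. Turn the walk capability back off, keeping only the main switch,
#    so the subsequent reclaim does not pay the walk_mm() latency.
echo 0x0001 > /sys/kernel/mm/lru_gen/enabled

# 4. Proactively reclaim through the stable cgroup v2 interface.
echo 1G > /sys/fs/cgroup/example/memory.reclaim
```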
Thanks,
Hao
> I'm not against the change; I just hope you don't depend on it
> continuing to exist/behave a certain way.
>
> Shakeel's comment is accurate. The lru_gen interface uses the internal
> memcg id which is not usually used to interface with the userspace.
> Reading this file does show the cgroup path and memcg id association.
>
>>
>> This will confuse some system memory pressure monitoring tools, making
>> it difficult to determine whether pgscan and pgsteal are caused by
>> system-level pressure or by proactive memory reclaim of some specific
>> memory cgroup.
>>
>> Therefore, make this interface behave similarly to memory.reclaim.
>> Update proactive memory reclaim statistics only to its memory cgroup.
>>
>> Signed-off-by: Hao Jia <jiahao1@...iang.com>
>
> The patch looks okay to me too.
>
> Thanks,
> Yuanchu