Message-ID: <b8c1a314-13ad-e610-31e4-fa931531aea9@gmail.com>
Date: Wed, 19 Mar 2025 10:38:01 +0800
From: Hao Jia <jiahao.kernel@...il.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: hannes@...xchg.org, akpm@...ux-foundation.org, tj@...nel.org,
corbet@....net, mhocko@...nel.org, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
Hao Jia <jiahao1@...iang.com>
Subject: Re: [PATCH 1/2] mm: vmscan: Split proactive reclaim statistics from
direct reclaim statistics
On 2025/3/18 20:59, Michal Koutný wrote:
> On Tue, Mar 18, 2025 at 08:03:44PM +0800, Hao Jia <jiahao.kernel@...il.com> wrote:
>>> How silly is it to have multiple memory.reclaim writers?
>>> Would it make sense to bind those statistics to each such a write(r)
>>> instead of the aggregated totals?
>>
>>
>> I'm sorry, I didn't understand what your suggestion was conveying.
>
> For instance one reclaimer for page cache and another for anon (in one
> memcg):
> echo "1G swappiness=0" >memory.reclaim &
> echo "1G swappiness=200" >memory.reclaim
>
Thank you for your suggestion.

However, binding the statistics to the memory.reclaim writers may not suit
our scenario. A single userspace proactive reclaimer process triggers
proactive reclaim on many different memory cgroups, so all of the reclaim
statistics would end up attributed to that one writer process. That does
not let us distinguish the proactive reclaim activity of the individual
cgroups.
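
To make the concern concrete, here is a minimal sketch of such a userspace
reclaimer (the cgroup names, the 100M target, and the counters grepped for
are only illustrative): one process writes to memory.reclaim for several
cgroups, so writer-bound accounting would lump all of their activity
together, whereas per-cgroup counters in each cgroup's memory.stat keep
them apart:

  #!/bin/sh
  # Illustrative only: cgroup names and the reclaim target are made up.
  CGROUP_ROOT=/sys/fs/cgroup

  for cg in web db batch; do
      # The same writer process issues proactive reclaim for each cgroup...
      echo "100M" > "$CGROUP_ROOT/$cg/memory.reclaim"

      # ...but the resulting activity should be visible per target cgroup,
      # e.g. as pgscan*/pgsteal* counters in its memory.stat, rather than
      # being tied to this single writer process.
      grep -E '^(pgscan|pgsteal)' "$CGROUP_ROOT/$cg/memory.stat"
  done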
Thanks,
Hao