Message-ID: <CAGWkznGLO7xpQK7E07dLv7ZfO53nx2fn54tVNw7-b46QnzKwkA@mail.gmail.com>
Date: Fri, 25 Mar 2022 11:02:48 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Chris Down <chris@...isdown.name>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
ke wang <ke.wang@...soc.com>,
"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, cgroups@...r.kernel.org
Subject: Re: [RFC PATCH] cgroup: introduce proportional protection on memcg
On Thu, Mar 24, 2022 at 10:27 PM Chris Down <chris@...isdown.name> wrote:
>
> I'm confused by the aims of this patch. We already have proportional reclaim
> for memory.min and memory.low, and memory.high is already "proportional" by its
> nature to drive memory back down behind the configured threshold.
>
> Could you please be more clear about what you're trying to achieve and in what
> way the existing proportional reclaim mechanisms are insufficient for you?
What I am trying to solve is that the memcg protection check [1] is
based on a set of fixed values in the current design, while the actual
scan and reclaim count [2] is derived from min/low proportionally to
the real memory usage, as you mentioned above. Fixed-value settings
have some constraints:
1. They are empirical values based on observation, which can be inaccurate.
2. Workloads vary across scenarios.
3. A fixed value in [1] can conflict with the dynamic cgroup_size in [2].
shrink_node_memcgs
    mem_cgroup_calculate_protection(target_memcg, memcg);
    if (mem_cgroup_below_min(memcg))           ===> [1] check whether the
        ...                                         memcg is protected,
    else if (mem_cgroup_below_low(memcg))           based on the fixed
        ...                                         min/low values
    shrink_lruvec
        get_scan_count
            mem_cgroup_protection              ===> [2] calculate the scan
                                                    size proportionally:
            scan = lruvec_size - lruvec_size * protection / (cgroup_size + 1);