Message-ID: <3E260DAC-2E2F-48B7-98BB-036EF0A423DC@didiglobal.com>
Date: Thu, 8 Dec 2022 14:07:06 +0000
From: 程垲涛 Chengkaitao Cheng
<chengkaitao@...iglobal.com>
To: Michal Hocko <mhocko@...e.com>
CC: chengkaitao <pilgrimtao@...il.com>,
"tj@...nel.org" <tj@...nel.org>,
"lizefan.x@...edance.com" <lizefan.x@...edance.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"corbet@....net" <corbet@....net>,
"roman.gushchin@...ux.dev" <roman.gushchin@...ux.dev>,
"shakeelb@...gle.com" <shakeelb@...gle.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"songmuchun@...edance.com" <songmuchun@...edance.com>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"zhengqi.arch@...edance.com" <zhengqi.arch@...edance.com>,
"ebiederm@...ssion.com" <ebiederm@...ssion.com>,
"Liam.Howlett@...cle.com" <Liam.Howlett@...cle.com>,
"chengzhihao1@...wei.com" <chengzhihao1@...wei.com>,
"haolee.swjtu@...il.com" <haolee.swjtu@...il.com>,
"yuzhao@...gle.com" <yuzhao@...gle.com>,
"willy@...radead.org" <willy@...radead.org>,
"vasily.averin@...ux.dev" <vasily.averin@...ux.dev>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"surenb@...gle.com" <surenb@...gle.com>,
"sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
"mcgrof@...nel.org" <mcgrof@...nel.org>,
"sujiaxun@...ontech.com" <sujiaxun@...ontech.com>,
"feng.tang@...el.com" <feng.tang@...el.com>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v2] mm: memcontrol: protect the memory in cgroup from
being oom killed
At 2022-12-08 16:14:10, "Michal Hocko" <mhocko@...e.com> wrote:
>On Thu 08-12-22 07:59:27, 程垲涛 Chengkaitao Cheng wrote:
>> At 2022-12-08 15:33:07, "Michal Hocko" <mhocko@...e.com> wrote:
>> >On Thu 08-12-22 11:46:44, chengkaitao wrote:
>> >> From: chengkaitao <pilgrimtao@...il.com>
>> >>
>> >> We added a new interface, <memory.oom.protect>, for memory. If the
>> >> OOM killer is invoked under a parent memory cgroup, and the memory
>> >> usage of a child cgroup is within its effective oom.protect boundary,
>> >> the cgroup's tasks won't be OOM killed unless there are no unprotected
>> >> tasks in other child cgroups. It draws on the logic of
>> >> <memory.min/low> in the inheritance relationship.
>> >>
>> >> It has the following advantages:
>> >> 1. We gain the ability to protect more important processes when a
>> >> memcg's OOM killer is invoked. The oom.protect only takes effect in
>> >> the local memcg and does not affect the OOM killer of the host.
>> >> 2. Historically, oom_score_adj has often been used to control a group
>> >> of processes, but it requires that all processes in the cgroup share a
>> >> common parent process, and we have to set that parent's oom_score_adj
>> >> before it forks all its children. This makes it very difficult to
>> >> apply in other situations. oom.protect has no such restriction, so we
>> >> can protect a cgroup of processes more easily. The cgroup can keep
>> >> some memory even if the OOM killer has to be called.
>> >>
>> >> Signed-off-by: chengkaitao <pilgrimtao@...il.com>
>> >> ---
>> >> v2: Modify the formula for how a process requests memcg protection quota.
>> >
>> >The new formula doesn't really address concerns expressed previously.
>> >Please read my feedback carefully again and follow up with questions if
>> >something is not clear.
>>
>> The previous discussion was quite scattered. Can you help me summarize
>> your concerns again?
>
>The most important part is http://lkml.kernel.org/r/Y4jFnY7kMdB8ReSW@dhcp22.suse.cz
>: Let me just emphasise that we are talking about fundamental disconnect.
>: Rss based accounting has been used for the OOM killer selection because
>: the memory gets unmapped and _potentially_ freed when the process goes
>: away. Memcg changes are bound to the object life time and as said in
>: many cases there is no direct relation with any process life time.
>
We need to discuss the relationship between a memcg's memory and a
process's memory:
  task_usage  = task_anon(rss_anon) + task_mapped_file(rss_file)
              + task_mapped_share(rss_share) + task_pgtables + task_swapents
  memcg_usage = memcg_anon + memcg_file + memcg_pgtables + memcg_share
              = all_task_anon + all_task_mapped_file + all_task_mapped_share
              + all_task_pgtables + unmapped_file + unmapped_share
              = all_task_usage + unmapped_file + unmapped_share - all_task_swapents
For most of its memory, a memcg is directly related to its processes. On
the other hand, unmapped_file pages and unmapped_share pages are not
charged to any process, and this memory cannot be freed by the OOM killer.
Therefore, they should not be granted protection quota by the cgroup, and
they can be excluded during the per-task calculation.
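The identity above can be checked with a toy model (an illustrative
Python sketch; the field names are assumptions mirroring the equations,
not kernel symbols):

```python
def memcg_usage(tasks, unmapped_file, unmapped_share):
    """Toy model of the memcg_usage identity above.

    Each task dict carries rss_anon, rss_file, rss_share, pgtables
    and swapents, matching the task_usage definition.
    """
    all_task_usage = sum(t["rss_anon"] + t["rss_file"] + t["rss_share"]
                         + t["pgtables"] + t["swapents"] for t in tasks)
    all_task_swapents = sum(t["swapents"] for t in tasks)
    # Swap entries count toward task_usage but are not resident memory
    # charged to the memcg, so they are subtracted back out.
    return all_task_usage + unmapped_file + unmapped_share - all_task_swapents
```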
                 memcg A
               /    |    \
         task-x  task-y  common-cache
           2G      2G        2G
eoom.protect(memcg A) = 3G
usage(memcg A) = 6G
usage(task x) = 2G
usage(task y) = 2G
usage(common-cache) = 2G
After calculation,
actual-protection(task x) = 1G
actual-protection(task y) = 1G
This formula is fairer to groups with fewer common caches (unmapped_file
pages and unmapped_share pages).
In extreme environments, unmapped_file pages and unmapped_share pages may
lock up a large share of the protection quota, but that is expected.
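The proportional split in the example can be sketched as follows (a
minimal Python model of the formula as I read it, not the kernel
implementation; the function and key names are illustrative):

```python
def actual_protection(eoom_protect, usages):
    """Split the effective oom.protect quota proportionally to usage.

    `usages` maps each consumer (task or unmapped cache) to its usage in
    bytes. The share attributed to unmapped caches is effectively
    "locked": it consumes quota but protects nothing the OOM killer
    could free.
    """
    total = sum(usages.values())
    if total == 0:
        return {name: 0 for name in usages}
    return {name: eoom_protect * usage // total
            for name, usage in usages.items()}

G = 1 << 30
split = actual_protection(3 * G, {"task-x": 2 * G,
                                  "task-y": 2 * G,
                                  "common-cache": 2 * G})
# matches the example: task-x and task-y each end up protected for 1G,
# and the remaining 1G of quota is locked by common-cache
```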
>That is to the per-process discount based on rss or any per-process
>memory metrics.
>
>Another really important question is the actual configurability. The
>hierarchical protection has to be enforced and that means that same as
>memory reclaim protection it has to be enforced top-to-bottom in the
>cgroup hierarchy. That makes the oom protection rather non-trivial to
>configure without having a good picture of a larger part of the cgroup
>hierarchy as it cannot be tuned based on a reclaim feedback.
There is an essential difference between reclaim and the OOM killer.
Reclaim cannot be directly perceived by users, so the memcg needs to
export indicators such as pgscan_(kswapd/direct). However, when a user
process is killed by the OOM killer, users can clearly perceive it and
count it themselves (for example, via the number of restarts of a certain
type of process). At the same time, the kernel already has memory.events
to record information about the OOM killer, which can likewise be used
for feedback-driven tuning. Of course, I could also add indicators, such
as the accumulated amount of memory released by the OOM killer, to help
users better track the OOM killer's behavior. Do you think that would be
valuable?
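For instance, the existing cgroup v2 memory.events file already exposes
oom and oom_kill counters that a userspace agent could poll for such
feedback. A minimal parser sketch (the sample text is illustrative; on a
real system the file lives under the cgroup's directory in
/sys/fs/cgroup):

```python
def parse_memory_events(text):
    """Parse cgroup v2 memory.events content into a dict of counters."""
    events = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key:
            events[key] = int(value)
    return events

# Illustrative sample of memory.events content:
sample = "low 0\nhigh 4\nmax 0\noom 3\noom_kill 2\n"
events = parse_memory_events(sample)
# events["oom_kill"] counts tasks in the cgroup killed by the OOM killer
```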
--
Thanks for your comment!
chengkaitao