Message-ID: <YsfwyTHE/5py1kHC@dhcp22.suse.cz>
Date: Fri, 8 Jul 2022 10:54:33 +0200
From: Michal Hocko <mhocko@...e.com>
To: Gang Li <ligang.bdlg@...edance.com>
Cc: akpm@...ux-foundation.org, surenb@...gle.com, hca@...ux.ibm.com,
gor@...ux.ibm.com, agordeev@...ux.ibm.com,
borntraeger@...ux.ibm.com, svens@...ux.ibm.com,
viro@...iv.linux.org.uk, ebiederm@...ssion.com,
keescook@...omium.org, rostedt@...dmis.org, mingo@...hat.com,
peterz@...radead.org, acme@...nel.org, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
namhyung@...nel.org, david@...hat.com, imbrenda@...ux.ibm.com,
adobriyan@...il.com, yang.yang29@....com.cn, brauner@...nel.org,
stephen.s.brennan@...cle.com, zhengqi.arch@...edance.com,
haolee.swjtu@...il.com, xu.xin16@....com.cn,
Liam.Howlett@...cle.com, ohoono.kwon@...sung.com,
peterx@...hat.com, arnd@...db.de, shy828301@...il.com,
alex.sierra@....com, xianting.tian@...ux.alibaba.com,
willy@...radead.org, ccross@...gle.com, vbabka@...e.cz,
sujiaxun@...ontech.com, sfr@...b.auug.org.au,
vasily.averin@...ux.dev, mgorman@...e.de, vvghjk1234@...il.com,
tglx@...utronix.de, luto@...nel.org, bigeasy@...utronix.de,
fenghua.yu@...el.com, linux-s390@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v2 0/5] mm, oom: Introduce per numa node oom for
CONSTRAINT_{MEMORY_POLICY,CPUSET}
On Fri 08-07-22 16:21:24, Gang Li wrote:
> TLDR
> ----
> If a mempolicy or cpuset is in effect, out_of_memory() will select a
> victim on the constrained node(s) to kill, so the kernel can avoid
> accidentally killing unrelated processes on a NUMA system.
We have discussed this in your previous posting, and an alternative
proposal was to use cpusets to partition NUMA-aware workloads and to
enhance the oom killer to be cpuset aware instead, which should be a
much easier solution.
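
To illustrate what such cpuset partitioning could look like from
userspace, here is a minimal sketch that confines a workload's
allocations to memory node 1, assuming a cgroup v2 hierarchy mounted at
/sys/fs/cgroup with the cpuset controller enabled in the parent's
cgroup.subtree_control; the cgroup name is arbitrary:

/* Hypothetical sketch: confine the current process's memory
 * allocations to NUMA node 1 via the cgroup v2 cpuset controller.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

static void write_file(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                exit(1);
        }
        close(fd);
}

int main(void)
{
        char buf[32];

        /* Create a child cgroup for the NUMA-partitioned workload. */
        mkdir("/sys/fs/cgroup/numa1-workload", 0755);

        /* Restrict the cgroup's allocations to memory node 1. */
        write_file("/sys/fs/cgroup/numa1-workload/cpuset.mems", "1");

        /* Move ourselves into the cgroup; all future allocations are
         * then subject to the node 1 constraint. */
        snprintf(buf, sizeof(buf), "%d", getpid());
        write_file("/sys/fs/cgroup/numa1-workload/cgroup.procs", buf);

        /* ... exec or run the actual workload here ... */
        return 0;
}

With the workload partitioned this way, an oom killer that understands
cpusets could restrict victim selection to tasks whose cpuset overlaps
the node that is actually under memory pressure.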
> Problem
> -------
> Before this patch series, the OOM killer selects and kills the process
> with the highest oom_badness (roughly, the highest memory usage) on the
> entire system.
>
> This works fine on UMA systems, but can lead to accidental kills on
> NUMA systems.
>
> As shown below, if process c.out is bound to Node1 and keeps allocating
> pages from Node1, a.out will be killed first. But killing a.out didn't
> free any memory on Node1, so c.out will be killed next.
>
> Many AMD machines have 8 NUMA nodes; on such systems there is a greater
> chance of triggering this problem.
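
For reference, a minimal reproducer for the c.out side of the quoted
scenario might look roughly like the sketch below, assuming libnuma is
available (link with -lnuma); the chunk size and node number are
arbitrary choices for illustration:

/* Hypothetical reproducer sketch: bind all allocations to NUMA node 1
 * and keep touching new pages until that node runs out of memory.
 */
#include <stdio.h>
#include <string.h>
#include <numa.h>

#define CHUNK (64UL << 20)      /* allocate in 64MB chunks */

int main(void)
{
        if (numa_available() < 0 || numa_max_node() < 1) {
                fprintf(stderr, "need a NUMA system with >= 2 nodes\n");
                return 1;
        }

        /* Use a strict MPOL_BIND policy rather than node-preferred
         * allocation, so pages really must come from node 1. */
        numa_set_bind_policy(1);

        for (;;) {
                /* Allocate memory restricted to node 1 ... */
                void *p = numa_alloc_onnode(CHUNK, 1);

                if (!p)
                        break;
                /* ... and fault it in so node 1 actually fills up. */
                memset(p, 0xa5, CHUNK);
        }
        return 0;
}

Running something like this alongside an unconstrained a.out with a
large RSS is the kind of setup where the system-wide oom_badness
ranking picks the wrong victim first.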
Please be more specific about existing use cases which suffer from the
current OOM handling limitations.
--
Michal Hocko
SUSE Labs