Message-ID: <alpine.DEB.2.20.1709071114560.20082@nuc-kabylake>
Date: Thu, 7 Sep 2017 11:18:18 -0500 (CDT)
From: Christopher Lameter <cl@...ux.com>
To: Roman Gushchin <guro@...com>
cc: linux-mm@...ck.org, Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [v7 2/5] mm, oom: cgroup-aware OOM killer
On Mon, 4 Sep 2017, Roman Gushchin wrote:
> To address these issues, cgroup-aware OOM killer is introduced.
You are missing a major issue here. Processes may have allocation
constraints tied to particular memory nodes, special DMA zones, etc.
OOM conditions on such resource-constrained allocations need to be
dealt with: killing processes that do not allocate under the same
restrictions may do nothing to improve the situation.
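To make the constraint concrete: a workload can be bound to a single memory node, so an OOM on that node is invisible to tasks allocating elsewhere. A minimal sketch using the numactl tool (the `./workload` binary and node number are placeholders; requires a NUMA-capable kernel and the numactl package):

```shell
# Bind all of the workload's memory allocations to NUMA node 0.
# If node 0 is exhausted, only tasks allocating from node 0 matter;
# killing a task whose memory lives on node 1 frees nothing useful here.
numactl --membind=0 ./workload

# Inspect per-node free memory to see which node is actually under pressure.
numactl --hardware
```

The same mismatch arises for lowmem/DMA-zone-constrained allocations: a victim chosen purely by cgroup memory footprint may hold no memory in the exhausted zone at all.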
> But a user can change this behavior by enabling the per-cgroup
> oom_kill_all_tasks option. If set, it causes the OOM killer to treat
> the whole cgroup as an indivisible memory consumer. If it is selected
> as the OOM victim, all of its tasks will be killed.
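For readers following along, the option described above would presumably be toggled through the cgroup v2 filesystem. A hedged sketch, assuming the patch series exposes the knob as a `memory.oom_kill_all_tasks` interface file (the cgroup name and mount point are illustrative; requires root and a kernel carrying this patch set):

```shell
# Create a cgroup for the workload under a cgroup v2 mount.
mkdir /sys/fs/cgroup/workload

# Mark the cgroup as an indivisible memory consumer: if the OOM killer
# selects it as the victim, every task in it is killed, not just one.
echo 1 > /sys/fs/cgroup/workload/memory.oom_kill_all_tasks

# Move the current shell (and hence its children) into the cgroup.
echo $$ > /sys/fs/cgroup/workload/cgroup.procs
```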
Sounds good in general, unless the cgroup or the processes therein run
out of memory due to memory access restrictions. How do you detect
that, and how is it handled?