Message-ID: <20170907145239.GA19022@castle.DHCP.thefacebook.com>
Date: Thu, 7 Sep 2017 15:52:39 +0100
From: Roman Gushchin <guro@...com>
To: Christopher Lameter <cl@...ux.com>
CC: David Rientjes <rientjes@...gle.com>, <nzimmer@....com>,
<holt@....com>, Michal Hocko <mhocko@...nel.org>,
<linux-mm@...ck.org>, Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, <kernel-team@...com>,
<cgroups@...r.kernel.org>, <linux-doc@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <sivanich@....com>
Subject: Re: [v7 5/5] mm, oom: cgroup v2 mount option to disable cgroup-aware
OOM killer
On Thu, Sep 07, 2017 at 09:43:30AM -0500, Christopher Lameter wrote:
> On Wed, 6 Sep 2017, David Rientjes wrote:
>
> > > The oom_kill_allocating_task sysctl, which causes the OOM killer
> > > to simply kill the allocating task, is useless. Killing a random
> > > task is not the best idea.
> > >
> > > Nobody likes it, and hopefully nobody uses it.
> > > We want to completely deprecate it at some point.
> > >
> >
> > SGI required it when it was introduced simply to avoid the very expensive
> > tasklist scan. Adding Christoph Lameter to the cc since he was involved
> > back then.
>
> Really? From what I know and worked on back then: the reason was to be
> able to contain the affected application in a cpuset. Multiple apps may
> have been running in multiple cpusets on a large NUMA machine, and an OOM
> condition in one cpuset should not affect the others. It also helped to
> isolate the application behavior causing the OOM in numerous cases.
>
> Doesn't this requirement transfer to cgroups in the same way?
We have per-node memory stats and plan to use them during OOM victim
selection. Hopefully that will help.
>
> I left SGI in 2008, so I am adding Dimitri, who may know about the current
> situation. Robin Holt also left SGI, as far as I know.
Thanks!
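
For reference, the sysctl discussed at the top of the thread is exposed at
/proc/sys/vm/oom_kill_allocating_task (see Documentation/sysctl/vm.txt).
A minimal user-space sketch of flipping it, assuming only the documented
procfs interface (the value used here is illustrative):

/*
 * Toggle vm.oom_kill_allocating_task via procfs.
 * 0 = scan the tasklist and pick a victim (default),
 * non-zero = kill the task that triggered the OOM.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/oom_kill_allocating_task", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("0\n", f);	/* keep the default full-scan heuristic */
	return fclose(f) ? 1 : 0;
}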