Message-ID: <alpine.DEB.2.10.1709210125150.10026@chino.kir.corp.google.com>
Date: Thu, 21 Sep 2017 01:27:29 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Roman Gushchin <guro@...com>
cc: Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [v8 0/4] cgroup-aware OOM killer
On Wed, 20 Sep 2017, Roman Gushchin wrote:
> > It's actually much more complex because in our environment we'd need an
> > "activity manager" with CAP_SYS_RESOURCE to control oom priorities of user
> > subcontainers when today it need only be concerned with top-level memory
> > cgroups. Users can create their own hierarchies with their own oom
> > priorities at will; it doesn't alter the selection heuristic for any
> > other user running on the same system, and it gives them full control
> > over the selection in their own subtree. We shouldn't require a
> > system-wide daemon with CAP_SYS_RESOURCE to manage subcontainers when
> > nothing else requires it. I believe it's also much easier to document:
> > oom_priority is considered for all sibling cgroups at each level of the
> > hierarchy and the cgroup with the lowest priority value gets iterated.
>
> I do agree actually. System-wide OOM priorities make no sense.
>
> Always comparing sibling cgroups, either by priority or size, seems to be
> simple, clear, and powerful enough for all reasonable use cases. Am I right
> that this is exactly what you've used internally? If so, that is a perfect
> confirmation, I believe.
>
We've used it for at least four years, and I added my Tested-by to your
patch. We would convert to your implementation if it is merged upstream,
and I would enthusiastically support your patch if you would integrate it
back into your series.
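
For illustration, here is a minimal userspace sketch of the sibling-comparison
walk quoted above: at each level of the hierarchy only siblings are compared,
and the walk descends into the one with the lowest priority value. The
struct cg_node type and its fields are hypothetical stand-ins, not the
kernel's data structures, and breaking ties by larger usage is only an
assumption drawn from the "either by priority or size" remark, so treat this
as a sketch of the selection rule rather than the actual implementation.

/*
 * Hypothetical, userspace-only model of the selection rule: the type,
 * field names, and the tie-break by usage are assumptions for illustration.
 */
#include <stddef.h>
#include <stdio.h>

struct cg_node {
    const char *name;
    int priority;           /* lower value => selected first */
    unsigned long usage;    /* bytes charged; used only to break ties */
    struct cg_node **children;
    size_t nr_children;
};

/* Compare only siblings: lowest priority wins, larger usage breaks ties. */
static struct cg_node *pick_sibling(struct cg_node **siblings, size_t n)
{
    struct cg_node *victim = NULL;
    size_t i;

    for (i = 0; i < n; i++) {
        if (!victim ||
            siblings[i]->priority < victim->priority ||
            (siblings[i]->priority == victim->priority &&
             siblings[i]->usage > victim->usage))
            victim = siblings[i];
    }
    return victim;
}

/* Descend one level at a time until a leaf cgroup is reached. */
static struct cg_node *select_victim(struct cg_node *root)
{
    struct cg_node *cur = root;

    while (cur && cur->nr_children)
        cur = pick_sibling(cur->children, cur->nr_children);
    return cur;
}

int main(void)
{
    struct cg_node leaf_a = { "job-a", 5, 1UL << 20, NULL, 0 };
    struct cg_node leaf_b = { "job-b", 2, 8UL << 20, NULL, 0 };
    struct cg_node *kids[] = { &leaf_a, &leaf_b };
    struct cg_node top = { "user-1", 0, 0, kids, 2 };

    printf("victim: %s\n", select_victim(&top)->name); /* prints "job-b" */
    return 0;
}

Because only siblings at each level are compared, the priorities a user
assigns inside their own subtree cannot affect selection among any other
user's cgroups, which matches the delegation argument above.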