Message-ID: <20170824145801.GA23457@castle.DHCP.thefacebook.com>
Date: Thu, 24 Aug 2017 15:58:01 +0100
From: Roman Gushchin <guro@...com>
To: Michal Hocko <mhocko@...nel.org>
CC: <linux-mm@...ck.org>, Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Tejun Heo <tj@...nel.org>, <kernel-team@...com>,
<cgroups@...r.kernel.org>, <linux-doc@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [v6 2/4] mm, oom: cgroup-aware OOM killer
On Thu, Aug 24, 2017 at 04:13:37PM +0200, Michal Hocko wrote:
> On Thu 24-08-17 14:58:42, Roman Gushchin wrote:
> > On Thu, Aug 24, 2017 at 02:58:11PM +0200, Michal Hocko wrote:
> > > On Thu 24-08-17 13:28:46, Roman Gushchin wrote:
> > > > Hi Michal!
> > > >
> > > There is nothing like a "better victim". We are pretty much in a
> > > catastrophic situation when we try to survive by killing a userspace
> > > task.
> >
> > Not necessarily, it can be a cgroup OOM.
>
> memcg OOM is no different. The catastrophe is scoped to the specific
> hierarchy, but tasks in that hierarchy still fail to make further
> progress.
>
> > > We try to kill the largest because that assumes that we return the
> > > most memory from it. Now I do understand that you want to treat the
> > > memcg as a single killable entity, but I find it really questionable
> > > to do a per-memcg metric and then not treat it like that and kill
> > > only a single task. Just imagine a single memcg with zillions of
> > > tasks, each very small, and you select it as the largest while
> > > killing a single small task doesn't help to get us out of the OOM.
> >
> > I don't think it's different from a non-containerized state: if you
> > have a zillion small tasks in the system, you'll run into the same issues.
>
> Yes, this is possible, but usually you are comparing apples to apples,
> so you will kill the largest offender and then go on. To be honest, I
> really do hate how we try to kill a child rather than the selected
> victim, for the same reason.
I do hate it too.
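For reference, the heuristic we both hate looks roughly like this (a toy
userspace sketch of the semantics only, not the actual oom_kill_process()
code; the names and numbers are made up): prefer the highest-scoring child
that has its own mm, and fall back to the selected victim only if no such
child exists.

#include <stdio.h>

struct task {
	const char *name;
	long score;	/* toy stand-in for a per-task oom score */
	int own_mm;	/* does the child have its own address space? */
};

/* Prefer the highest-scoring child with its own mm; else the victim. */
static const struct task *pick_to_kill(const struct task *victim,
				       const struct task *children, int n)
{
	const struct task *best = NULL;
	int i;

	for (i = 0; i < n; i++)
		if (children[i].own_mm &&
		    (!best || children[i].score > best->score))
			best = &children[i];

	return best ? best : victim;
}

int main(void)
{
	/* invented numbers for illustration */
	const struct task victim = { "victim", 1000, 1 };
	const struct task children[] = {
		{ "child-a", 10, 1 },
		{ "child-b", 20, 1 },
	};

	/* child-b dies first even though the victim is far larger */
	printf("killing %s\n", pick_to_kill(&victim, children, 2)->name);
	return 0;
}

So a 1000-page victim can survive while its 20-page child dies, which
rarely helps.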
>
> > > > > I guess I have asked already and we haven't reached any consensus. I do
> > > > > not like how you treat memcgs and tasks differently. Why can't the
> > > > > memcg score be the sum of all its tasks' scores?
> > > >
> > > > It sounds like a more expensive way to get almost the same result with
> > > > less accuracy. Why is it better?
> > >
> > > because then you are comparing apples to apples?
> >
> > Well, I can say that I compare some number of pages against some other
> > number of pages. And the relation between a page and a memcg is more
> > obvious than the relation between a page and a process.
>
> But you are comparing different accounting systems.
>
> > Neither way is ideal, and summing over processes is not ideal either.
> > Especially if you take oom_score_adj into account. Will you respect it?
>
> Yes, and I do not see any reason why we shouldn't.
It makes things even more complicated.
Right now a task's oom_score can be in the (~ -total_memory, ~ +2*total_memory)
range, and if you start summing it, it can be multiplied by the number of
tasks... Weird.
It will also differ between system-wide and memcg-wide OOM.
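To make the range problem concrete, here is a toy userspace calculation
(a simplified stand-in for the per-task scoring, not the real kernel
oom_badness() formula; all numbers are invented):

#include <stdio.h>

/*
 * Toy per-task score: footprint in pages plus oom_score_adj scaled by
 * total memory, clamped at zero.  Simplified for illustration only.
 */
static long task_badness(long pages, int oom_score_adj, long totalpages)
{
	long points = pages + oom_score_adj * totalpages / 1000;

	return points > 0 ? points : 0;
}

int main(void)
{
	long totalpages = 1000000;	/* hypothetical machine size */
	long sum = 0;
	int i;

	/* 1000 small tasks, each boosted by oom_score_adj = 1000 */
	for (i = 0; i < 1000; i++)
		sum += task_badness(100, 1000, totalpages);

	/* the summed memcg "score" dwarfs any single-task bound */
	printf("single-task bound ~%ld, summed memcg score %ld\n",
	       2 * totalpages, sum);
	return 0;
}

A thousand small tasks with a positive oom_score_adj sum to roughly 500x
the bound any single task can reach.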
>
> > I actually started with such an approach, but then found it weird.
> >
> > > Besides that you have
> > > to check each task for over-killing anyway. So I do not see any
> > > performance merits here.
> >
> > It's an implementation detail, and we can hopefully get rid of it at some point.
>
> Well, we might do some estimations and ignore oom scopes, but that
> sounds really complicated and error prone. Until we have anything like
> that, I would start from tasks and build up what is necessary to make a
> decision at the higher level.
Seriously speaking, do you have an example where summing per-process
oom_score would work better?
Especially if we're talking about customizing the oom_score calculation,
it makes no sense to me. How would you sum process timestamps?
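E.g., a trivial illustration with made-up numbers: page footprints add up
to something meaningful, while a "kill the youngest" policy keyed on start
times has no sensible sum.

#include <stdio.h>

int main(void)
{
	/* invented numbers for illustration */
	long pages[] = { 100, 200, 300 };	  /* additive: sum is the group footprint */
	long start_time[] = { 1500, 2900, 3400 }; /* not additive: only newest/oldest matter */
	long sum_pages = 0, sum_time = 0, newest = 0;
	int i;

	for (i = 0; i < 3; i++) {
		sum_pages += pages[i];
		sum_time += start_time[i];
		if (start_time[i] > newest)
			newest = start_time[i];
	}

	printf("footprint sum: %ld pages (meaningful)\n", sum_pages);
	printf("timestamp sum: %ld vs newest: %ld (meaningless)\n",
	       sum_time, newest);
	return 0;
}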
>
> > > > > How do you want to compare memcg score with tasks score?
> > > >
> > > > I have to do it for tasks in the root cgroup, but it shouldn't be a common case.
> > >
> > > How come? I can easily imagine a setup where only some memcgs really
> > > need kill-all semantics while all others can live perfectly fine with
> > > a single task killed.
> >
> > I mean, taking the unified cgroup hierarchy into account, there should
> > not be a lot of tasks in the root cgroup, if any.
>
> Is that really the case? I would assume that the memory controller would
> be enabled only in those subtrees which really use the functionality,
> and the rest will be sitting in the root memcg. That might be the case
> if you are running only containers, but I am not really sure this is
> true in general.
Agree.
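Just to recap the semantics being debated with a toy model (purely
illustrative userspace code, not the patch itself; group names and numbers
are invented): pick the memcg with the largest aggregate footprint, then
either kill its biggest task or, with kill-all semantics, the whole group.

#include <stdio.h>

struct memcg {
	const char *name;
	long usage;		/* aggregate pages charged to the group */
	long biggest_task;	/* footprint of its largest task */
	int kill_all;		/* kill-all semantics requested? */
};

int main(void)
{
	/* hypothetical groups; usage numbers are made up */
	struct memcg groups[] = {
		{ "workload-a", 500000, 120000, 0 },
		{ "workload-b", 800000, 900, 1 },	/* zillions of tiny tasks */
	};
	struct memcg *victim = &groups[0];
	int i, n = 2;

	/* pick the memcg with the largest aggregate footprint */
	for (i = 1; i < n; i++)
		if (groups[i].usage > victim->usage)
			victim = &groups[i];

	if (victim->kill_all)
		printf("killing all tasks in %s, freeing ~%ld pages\n",
		       victim->name, victim->usage);
	else
		printf("killing biggest task in %s, freeing ~%ld pages\n",
		       victim->name, victim->biggest_task);
	return 0;
}

Without kill-all, winning the selection would free only 900 pages here,
which is exactly the over-selection concern discussed above.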