Message-ID: <alpine.DEB.2.10.1706061339410.23608@chino.kir.corp.google.com>
Date: Tue, 6 Jun 2017 13:42:29 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Roman Gushchin <guro@...com>
cc: linux-mm@...ck.org, Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Li Zefan <lizefan@...wei.com>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
kernel-team@...com, cgroups@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 1/7] mm, oom: refactor select_bad_process() to
take memcg as an argument
On Tue, 6 Jun 2017, Roman Gushchin wrote:
> Hi David!
>
> Thank you for sharing this!
>
> It's very interesting, and it looks like
> it's not that far from what I've suggested.
>
> So we definitely need to come up with a common solution.
>
Hi Roman,
Yes, definitely. If it would be helpful to see where there is common
ground, I could post a series of patches to do everything that was listed
in my email, sans the fully inclusive kmem accounting, which may be
pursued at a later date.
Another question: what do you think about userspace oom handling? We
implement our own oom kill policies in userspace, for both the system and
for user-controlled memcg hierarchies, because the desired policy often
does not match the kernel implementation and because there is often some
action that can be taken other than killing a process. Have you tried to
implement userspace oom handling, or are you considering it? This is the
main motivation behind allowing an oom delay to be configured.