Message-ID: <20190128125151.GI18811@dhcp22.suse.cz>
Date: Mon, 28 Jan 2019 13:51:51 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Tejun Heo <tj@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Chris Down <chris@...isdown.name>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>, Dennis Zhou <dennis@...nel.org>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, kernel-team@...com
Subject: Re: [PATCH 2/2] mm: Consider subtrees in memory.events
On Fri 25-01-19 10:28:08, Tejun Heo wrote:
> Hello, Michal.
>
> On Fri, Jan 25, 2019 at 06:37:13PM +0100, Michal Hocko wrote:
> > > What if a user wants to monitor any ooms in the subtree though, which
> > > is a valid use case?
> >
> > How is that information useful without knowing which memcg the oom
> > applies to?
>
> For example, a workload manager watching over a subtree for a job with
> nested memory limits set by the job itself. It wants to take action
> (reporting and possibly other remediative actions) when something goes
> wrong in the delegated subtree but isn't involved in how the subtree
> is configured inside.
Yes, I understand this part, but it is not clear to me _how_ to report
anything sensible without knowing _what_ has caused the event. You can
walk the cgroup hierarchy and compare cached results with new ones, but
this is a) racy and b) clumsy.
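For illustration, the polling approach described above can be sketched roughly as follows. This is not a kernel API, just userspace parsing of the flat key/value format that memory.events uses; the helper names are hypothetical:

```python
# Sketch of the "walk the hierarchy and diff cached results" approach:
# parse each memcg's memory.events file and compare against a cached
# snapshot to spot new events. Racy, because counters can change between
# reads of different cgroups, and clumsy, because the whole subtree must
# be rescanned on every poll.

def parse_events(text):
    """Parse the flat 'key value' lines of a memory.events file."""
    events = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        events[key] = int(value)
    return events

def new_events(cached, current):
    """Return the delta for counters that grew since the cached snapshot."""
    return {k: v - cached.get(k, 0)
            for k, v in current.items() if v > cached.get(k, 0)}
```

A monitor would read `/sys/fs/cgroup/<path>/memory.events` for every cgroup in the delegated subtree, run `new_events()` against its cache, and report anything non-empty, which is exactly the per-cgroup bookkeeping the hierarchical counter is meant to avoid.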
> > > If local event monitoring is useful and it can be,
> > > let's add separate events which are clearly identifiable to be local.
> > > Right now, it's confusing like hell.
> >
> > From a backward compatible POV it should be a new interface added.
>
> That sure is an option for use cases like above but it has the
> downside of carrying over the confusing interface into the indefinite
> future.
I actually believe that this is not such a big deal. For one thing, the
current events are actually helpful for watching the reclaim/setup
behavior.
> Again, I'd like to point back at how we changed the write and trim
> accounting because the benefits outweighed the risks.
>
> > Please note that I understand that this might be confusing with the rest
> > of the cgroup APIs, but considering that this is the first time somebody
> > is actually complaining and the interface has been "production ready" for
> > more than three years, I am not really sure the situation is all that bad.
>
> cgroup2 uptake hasn't progressed that fast. None of the major distros
> or container frameworks are currently shipping with it although many
> are evaluating switching. I don't think I'm too mistaken in that we
> (FB) are at the bleeding edge in terms of adopting cgroup2 and its
> various new features and are hitting these corner cases and oversights
> in the process. If there are noticeable breakages arising from this
> change, we sure can backpedal, but I think the better course of action
> is fixing them up while we can.
I do not really think you can go back. You cannot simply change the
semantics back and forth, because you would just break the new users.
Really, I do not see the semantics changing after more than 3 years of a
production-ready interface. If you really believe we need a hierarchical
notification mechanism for the reclaim activity, then add a new one.
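To make the semantic split under discussion concrete: a local counter records only events that hit a cgroup itself, while a hierarchical counter sums those local counters over the whole subtree. A minimal sketch of that relationship (the tree layout and function names are illustrative, not a kernel interface):

```python
# Hierarchical vs. local event counts: the hierarchical value at a node
# is its own local counters plus those of every descendant. A workload
# manager watching only the subtree root then sees ooms that happened
# anywhere below, which is the monitoring use case from the thread.

def hierarchical_events(tree, node):
    """Sum a node's local counters with those of all its descendants."""
    total = dict(tree[node]["events"])
    for child in tree[node].get("children", []):
        for key, value in hierarchical_events(tree, child).items():
            total[key] = total.get(key, 0) + value
    return total
```

With this split, a leaf's oom shows up in the subtree root's hierarchical view even when the root's own local counters stay at zero, so the two notions can coexist as separate interfaces rather than one file changing meaning.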
--
Michal Hocko
SUSE Labs