Message-ID: <20200225122028.GS22443@dhcp22.suse.cz>
Date: Tue, 25 Feb 2020 13:20:28 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH v2 3/3] mm: memcontrol: recursive memory.low protection
On Fri 21-02-20 10:43:59, Johannes Weiner wrote:
> On Fri, Feb 21, 2020 at 11:11:47AM +0100, Michal Hocko wrote:
[...]
> > I also have a hard time grasping what you actually mean by the above.
> > Let's say you have a hierarchy where you split the low limit unevenly:
> >
> >                root (5G of memory)
> >               /                   \
> >      (low 3G) A                    D (low 1.5G)
> >             /   \
> >    (low 1G) B     C (low 2G)
> >
> > B gets lower priority than C and D while C gets higher priority than
> > D? Is there any problem with such a configuration from the semantic
> > point of view?
>
> No, that's completely fine.
How is B (low $EPS), C (low 3G-$EPS) where $EPS->0 so much different
from the above? You prioritize C over B and D over B in both cases under
global memory pressure.
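
Spelled out against the cgroup2 filesystem, the example above would look
roughly like this (just a sketch: it assumes cgroup2 is mounted at
/sys/fs/cgroup and "root" stands for the whole machine with its 5G):

  mkdir -p /sys/fs/cgroup/A /sys/fs/cgroup/D
  echo "+memory" > /sys/fs/cgroup/cgroup.subtree_control
  mkdir -p /sys/fs/cgroup/A/B /sys/fs/cgroup/A/C
  echo "+memory" > /sys/fs/cgroup/A/cgroup.subtree_control

  # uneven split of the 5G: A is protected up to 3G, D up to 1.5G
  echo 3G > /sys/fs/cgroup/A/memory.low
  echo 1536M > /sys/fs/cgroup/D/memory.low   # 1.5G, no fractional suffixes
  # within A, C is preferred over B under global memory pressure
  echo 1G > /sys/fs/cgroup/A/B/memory.low
  echo 2G > /sys/fs/cgroup/A/C/memory.low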
[...]
> > > However, that doesn't mean this usecase isn't supported. You *can*
> > > always split cgroups for separate resource policies.
> >
> > What if the split-up is not possible or is impractical? Let's say you want
> > to control how much CPU share your container workload gets compared
> > to other containers running on the system. Or let's say you want to
> > treat the whole container as a single entity from the OOM perspective
> > (this would be an example of the logical organization constraint) because
> > you do not want to leave any part of that workload lingering behind if
> > the global OOM kicks in. I am pretty sure there are many other reasons
> > to run related workloads that don't really share the memory protection
> > demand under a shared cgroup hierarchy.
>
> The problem is that your "pretty sure" has been proven to be very
> wrong in real life. And that's one reason why these arguments are so
> frustrating: it's your intuition and gut feeling against the
> experience of using this stuff hands-on in large scale production
> deployments.
I am pretty sure you have a lot of experience with the FB workloads,
and I have never questioned that. All I am trying to explore here is
what the consequences of the newly proposed semantics are. I have provided
a few examples of when an opt-out from memory protection might be
practical. You seem to disagree on the relevance of those usecases, and I
can live with that. Not that I am fully convinced, because there is a
difference between very tight resource control, which is your primary
usecase, and much simpler deployments that focus on particular resources,
tend to work most of the time, and where occasional failures are
acceptable.
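
To make one of those examples concrete (purely a sketch, with made-up
cgroup names and values): a container that competes for CPU with its
sibling containers and is taken down as a whole by the global OOM killer,
but whose owner asks for no memory.low protection at all:

  # CPU weight relative to the sibling containers
  echo 200 > /sys/fs/cgroup/containers/app/cpu.weight
  # a global OOM kill should take the whole container, not a single task
  echo 1 > /sys/fs/cgroup/containers/app/memory.oom.group
  # ... while no memory protection is requested for it
  echo 0 > /sys/fs/cgroup/containers/app/memory.low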
That being said, the new interface requires an explicit opt-in via a mount
option, so there is no risk of regressions, and I can live with it. Please
make sure to document explicitly that the effective low limit protection
doesn't allow opting out even when the limit is set to 0 and the
propagated protection is fully assigned to a sibling memcg.
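
Something along these lines would make the point in the documentation (a
sketch only; I am using memory_recursiveprot as the mount option name and
made-up group names and sizes):

  mount -t cgroup2 -o memory_recursiveprot none /sys/fs/cgroup
  mkdir /sys/fs/cgroup/parent
  echo "+memory" > /sys/fs/cgroup/cgroup.subtree_control
  mkdir /sys/fs/cgroup/parent/job /sys/fs/cgroup/parent/batch
  echo "+memory" > /sys/fs/cgroup/parent/cgroup.subtree_control

  echo 10G > /sys/fs/cgroup/parent/memory.low
  echo 10G > /sys/fs/cgroup/parent/job/memory.low   # sibling claims all of it
  echo 0 > /sys/fs/cgroup/parent/batch/memory.low   # this is NOT an opt-out:
  # batch still receives a usage-proportional share of whatever part of
  # parent's protection job does not actually consume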
It would also be really appreciated if we had some more specific examples
of the priority inversion problems you have encountered previously and
placed them somewhere in our documentation. There is essentially nothing
like that in the tree.
Thanks!
--
Michal Hocko
SUSE Labs