Message-ID: <20200226164632.GL27066@blackbody.suse.cz>
Date: Wed, 26 Feb 2020 17:46:32 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...e.com>,
Tejun Heo <tj@...nel.org>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH v2 2/3] mm: memcontrol: clean up and document effective
low/min calculations
On Tue, Feb 25, 2020 at 01:40:14PM -0500, Johannes Weiner <hannes@...xchg.org> wrote:
> Hm, this example doesn't change with my patch because there is no
> "floating" protection that gets distributed among the siblings.
Maybe it had changed even earlier and the example became obsolete.
> In my testing with the above parameters, the equilibrium still comes
> out to roughly this distribution.
I'm attaching my test (10-times smaller) and I'm getting these results:
> /sys/fs/cgroup/test.slice/memory.current:838750208
> /sys/fs/cgroup/test.slice/pressure.service/memory.current:616972288
> /sys/fs/cgroup/test.slice/test-A.slice/memory.current:221782016
> /sys/fs/cgroup/test.slice/test-A.slice/B.service/memory.current:123428864
> /sys/fs/cgroup/test.slice/test-A.slice/C.service/memory.current:93495296
> /sys/fs/cgroup/test.slice/test-A.slice/D.service/memory.current:4702208
> /sys/fs/cgroup/test.slice/test-A.slice/E.service/memory.current:155648
(I'm running that on 5.6.0-rc2 + first two patches of your series.)
That's IMO closer to my simulation (1.16:0.84)
than to the example's prediction (1.3:0.6).
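For reference, here is a minimal userspace sketch of the proportional
step as I read it from your patch (the function and naming below are
mine, not the kernel code), applied once to the example's initial
numbers -- this single application is where the ~1.3G/~0.6G prediction
comes from:

#include <stdio.h>

/* Sketch of the proportional distribution from the patch:
 * protected = min(usage, setting); if the siblings together claim
 * more than the parent affords, scale by parent/siblings. */
static double eprot(double usage, double setting,
		    double parent_effective, double siblings_protected)
{
	double prot = usage < setting ? usage : setting;

	if (siblings_protected > parent_effective)
		return prot * parent_effective / siblings_protected;
	return prot;
}

int main(void)
{
	/* Example from the comment block: A/low=2G (parent effective),
	 * B: low=3G usage=2G, C: low=1G usage=2G, D: low=0 usage=2G. */
	double parent = 2.0;
	double sib = 2.0 + 1.0 + 0.0;	/* min(2,3) + min(2,1) + min(2,0) */

	printf("B elow = %.2fG\n", eprot(2.0, 3.0, parent, sib)); /* ~1.33 */
	printf("C elow = %.2fG\n", eprot(2.0, 1.0, parent, sib)); /* ~0.67 */
	printf("D elow = %.2fG\n", eprot(2.0, 0.0, parent, sib)); /* 0 */
	return 0;
}
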
> It's just to illustrate the pressure weight, not to reflect each
> factor that can influence the equilibrium.
But it's good to have some idea about the equilibrium when configuring
the values.
> I think it still has value to gain understanding of how it works, no?
Alas, the example confused me so much that I had to write the
simulation to get a grasp of it :-)
And even running the actual code now, I'd say the values in the
original example are only one of the possible equilibria, and
definitely not one reachable from the stated initial conditions.
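To illustrate what I mean (this is only a toy model I wrote for this
mail -- neither the attached run.sh nor the kernel's reclaim logic --
which reclaims from each child in proportion to its unprotected excess
and recomputes the effective lows every round):

#include <stdio.h>

static double min2(double a, double b) { return a < b ? a : b; }

int main(void)
{
	/* children B, C, D of A (E has no usage and is left out) */
	double use[3] = { 2.0, 2.0, 2.0 };	/* memory.current, in G */
	double low[3] = { 3.0, 1.0, 0.0 };	/* memory.low, in G */
	double parent_elow = 2.0;		/* A's effective low */
	double total = use[0] + use[1] + use[2];
	double step = 0.001;			/* reclaimed per round */
	int i;

	/* push A down to its protected 2G, recomputing elow each round */
	while (total > parent_elow + 1e-9) {
		double prot[3], recl[3], sib = 0.0, excess = 0.0, amount;

		for (i = 0; i < 3; i++) {
			prot[i] = min2(use[i], low[i]);
			sib += prot[i];
		}
		for (i = 0; i < 3; i++) {
			double elow = prot[i];

			if (sib > parent_elow)
				elow = prot[i] * parent_elow / sib;
			recl[i] = use[i] > elow ? use[i] - elow : 0.0;
			excess += recl[i];
		}
		if (excess <= 0.0)
			break;
		amount = min2(step, total - parent_elow);
		for (i = 0; i < 3; i++) {	/* scan proportionally */
			double d = amount * recl[i] / excess;

			use[i] -= d;
			total -= d;
		}
	}
	printf("B=%.3fG C=%.3fG D=%.3fG\n", use[0], use[1], use[2]);
	return 0;
}

In this model, once C's usage drops under its 1G setting both B and C
are protected in proportion to their usage, so their ratio freezes
wherever it happens to be at that moment; any endpoint with B+C = 2G
then stops the reclaim, and 1.3:0.6 is just one of them.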
> > > @@ -6272,12 +6262,63 @@ struct cgroup_subsys memory_cgrp_subsys = {
> > > * for next usage. This part is intentionally racy, but it's ok,
> > > * as memory.low is a best-effort mechanism.
> > Although it's a different issue, I'm mentioning it since this
> > updates the docs -- we treat memory.min the same way, i.e. it's
> > subject to the same race; however, it's not meant to be best
> > effort. I didn't look into the outcomes of potential misaccounting,
> > but the comment seems to miss the impact on memory.min protection.
>
> Yeah I think we can delete that bit.
Erm, which part?
Leaving the racy behavior undocumented, or the fact that it applies to
both memory.low and memory.min?
> I believe we cleared this up in the parallel thread, but just in case:
> reclaim can happen due to a memory.max set lower in the
> tree. memory.low propagation is always relative from the reclaim
> scope, not the system-wide root cgroup.
Clear now.
Michal
Download attachment "run.sh" of type "application/x-sh" (1257 bytes)
Download attachment "signature.asc" of type "application/pgp-signature" (834 bytes)