Message-ID: <20200429141945.GB5054@cmpxchg.org>
Date: Wed, 29 Apr 2020 10:19:45 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Yafang Shao <laoar.shao@...il.com>
Cc: Michal Hocko <mhocko@...nel.org>,
Chris Down <chris@...isdown.name>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>, Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm, memcg: Avoid stale protection values when cgroup
is above protection

On Wed, Apr 29, 2020 at 06:53:03PM +0800, Yafang Shao wrote:
> On Wed, Apr 29, 2020 at 6:15 PM Michal Hocko <mhocko@...nel.org> wrote:
> >
> > On Tue 28-04-20 19:26:47, Chris Down wrote:
> > > From: Yafang Shao <laoar.shao@...il.com>
> > >
> > > A cgroup can have both memory protection and a memory limit to isolate
> > > it from its siblings in both directions - for example, to prevent it
> > > from being shrunk below 2G under high pressure from outside, but also
> > > from growing beyond 4G under low pressure.
> > >
> > > Commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim")
> > > implemented proportional scan pressure so that multiple siblings in
> > > excess of their protection settings don't get reclaimed equally but
> > > instead in accordance with their unprotected portion.
> > >
> > > During limit reclaim, this proportionality shouldn't apply of course:
> > > there is no competition, all pressure is from within the cgroup and
> > > should be applied as such. Reclaim should operate at full efficiency.
> > >
> > > However, mem_cgroup_protected() never expected anybody to look at the
> > > effective protection values when it indicated that the cgroup is above
> > > its protection. As a result, a query during limit reclaim may return
> > > stale protection values that were calculated by a previous reclaim cycle
> > > in which the cgroup did have siblings.
> > >
> > > When this happens, reclaim is unnecessarily hesitant and potentially
> > > slow to meet the desired limit. In theory this could lead to premature
> > > OOM kills, although it's not obvious this has occurred in practice.
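
For illustration, the proportional scaling from 9783aa9917f8 has roughly
this shape (a userspace sketch with made-up numbers; scan_target and the
exact rounding are simplifications, not the kernel code):

#include <stdio.h>

/*
 * Scale the scan target down to the unprotected portion of the cgroup,
 * the way proportional low/min reclaim does.  "protection" is the
 * (possibly stale) effective elow/emin, "usage" the cgroup size.
 */
static unsigned long scan_target(unsigned long lru_pages,
				 unsigned long usage,
				 unsigned long protection)
{
	if (!protection)
		return lru_pages;	/* no protection: full pressure */
	return lru_pages - lru_pages * protection / (usage + 1);
}

int main(void)
{
	unsigned long lru = 1UL << 20;		/* LRU pages in the cgroup */
	unsigned long usage = 1UL << 20;	/* usage pinned at the limit */

	/* limit reclaim, but a stale elow of half the usage lingers */
	printf("stale protection: scan %lu of %lu pages\n",
	       scan_target(lru, usage, 1UL << 19), lru);
	/* what limit reclaim should do: no competition, full efficiency */
	printf("no protection:    scan %lu of %lu pages\n",
	       scan_target(lru, usage, 0), lru);
	return 0;
}

With a stale elow of half the usage, limit reclaim targets only about
half the pages it should, which is the hesitancy described above.
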
> >
> > Thanks, this describes the underlying problem. I would also be explicit
> > that the issue should be visible only on tail memcgs which have both
> > max/high and protection configured, and that the effect depends on the
> > difference between the two (the smaller it is, the larger the effect).
> >
> > There is no mention of the fix. The patch resets effective values for
> > the reclaim root, and I've had some concerns about that:
> > http://lkml.kernel.org/r/20200424162103.GK11591@dhcp22.suse.cz.
> > Johannes has argued that other races are possible and I didn't get to
> > think about it thoroughly. But this patch introduces a new possibility
> > of breaking protection.
>
> Agreed with Michal that more writes will cause more bugs.
> We should modify the volatile emin and elow as little as possible.

That's not a technical argument.

If races are a problem, it doesn't matter that they're rare. If
they're not a problem, it doesn't matter that they're frequent.
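
For concreteness, the reset being discussed amounts to something like
this (a paraphrased sketch with simplified types; struct prot and
effective_protection are stand-ins, not the literal patch):

/* effective protection left behind by the last protection calculation */
struct prot {
	unsigned long emin;
	unsigned long elow;
};

/*
 * If the cgroup under reclaim is itself the reclaim root (limit/high
 * reclaim), there is no sibling competition, so effective protection
 * computed during an earlier sibling-reclaim cycle must not be applied.
 */
static void effective_protection(struct prot *p, int is_reclaim_root)
{
	if (is_reclaim_root) {
		p->emin = 0;
		p->elow = 0;
	}
}

The objection quoted above is that adding writes to emin/elow opens
another window in which concurrent reclaim elsewhere in the tree can
observe weakened protection.
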
> > If we want to have a quick and
> > simple fix that would be easier to backport to older kernels, then I
> > would feel much better if we simply worked around the problem as
> > suggested earlier: http://lkml.kernel.org/r/20200423061629.24185-1-laoar.shao@gmail.com
>
> +1
>
> This should be the right workaround to fix the current issue, and it is
> worth backporting to the stable kernel.

From Documentation/process/stable-kernel-rules.rst:

- It must fix a real bug that bothers people (not a, "This could be a
  problem..." type thing).

There hasn't been a mention of this affecting real workloads in the
submission history of this patch, so it doesn't qualify for -stable.