Message-ID: <Yrl2T632Vfv8QGPn@dhcp22.suse.cz>
Date:   Mon, 27 Jun 2022 11:20:15 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Yosry Ahmed <yosryahmed@...gle.com>
Cc:     Shakeel Butt <shakeelb@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Miaohe Lin <linmiaohe@...wei.com>, NeilBrown <neilb@...e.de>,
        Alistair Popple <apopple@...dia.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Peter Xu <peterx@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Cgroups <cgroups@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: vmpressure: don't count userspace-induced reclaim as
 memory pressure

On Mon 27-06-22 01:39:46, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 1:25 AM Michal Hocko <mhocko@...e.com> wrote:
> >
> > On Thu 23-06-22 10:26:11, Yosry Ahmed wrote:
> > > On Thu, Jun 23, 2022 at 10:04 AM Michal Hocko <mhocko@...e.com> wrote:
> > > >
> > > > On Thu 23-06-22 09:42:43, Shakeel Butt wrote:
> > > > > On Thu, Jun 23, 2022 at 9:37 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > > >
> > > > > > On Thu 23-06-22 09:22:35, Yosry Ahmed wrote:
> > > > > > > On Thu, Jun 23, 2022 at 2:43 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > > > > >
> > > > > > > > On Thu 23-06-22 01:35:59, Yosry Ahmed wrote:
> > > > > > [...]
> > > > > > > > > In our internal version of memory.reclaim that we recently upstreamed,
> > > > > > > > > we do not account vmpressure during proactive reclaim (similar to how
> > > > > > > > > psi is handled upstream). We want to make sure this behavior also
> > > > > > > > > exists in the upstream version so that consolidating them does not
> > > > > > > > > break our users, who rely on vmpressure and would otherwise start
> > > > > > > > > seeing increased pressure due to proactive reclaim.
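For anyone following along, vmpressure boils the scanned/reclaimed ratio
of a reclaim window down to a level and notifies listeners. A simplified
sketch of the level calculation, modeled on mm/vmpressure.c from memory
rather than quoted verbatim:

/*
 * Simplified sketch of the vmpressure level calculation (modeled on
 * mm/vmpressure.c; not verbatim upstream code). "pressure" approximates
 * the percentage of scanned pages that could not be reclaimed over a
 * sampling window.
 */
enum vmpressure_levels {
	VMPRESSURE_LOW,
	VMPRESSURE_MEDIUM,
	VMPRESSURE_CRITICAL,
};

static enum vmpressure_levels vmpressure_calc_level(unsigned long scanned,
						    unsigned long reclaimed)
{
	unsigned long scale = scanned + reclaimed;
	unsigned long pressure = 0;

	if (reclaimed < scanned) {
		pressure = scale - (reclaimed * scale / scanned);
		pressure = pressure * 100 / scale;
	}

	if (pressure >= 95)	/* vmpressure_level_critical */
		return VMPRESSURE_CRITICAL;
	if (pressure >= 60)	/* vmpressure_level_med */
		return VMPRESSURE_MEDIUM;
	return VMPRESSURE_LOW;
}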
> > > > > > > >
> > > > > > > > These are good reasons to have this patch in your tree. But why is this
> > > > > > > > patch beneficial for the upstream kernel? It clearly adds some code and
> > > > > > > > some special casing, which will add maintenance overhead.
> > > > > > >
> > > > > > > It is not just Google: any existing vmpressure users will start seeing
> > > > > > > false pressure notifications with memory.reclaim. The main goal of the
> > > > > > > patch is to make sure memory.reclaim does not break pre-existing users
> > > > > > > of vmpressure, and doing it in a way that is consistent with psi makes
> > > > > > > sense.
> > > > > >
> > > > > > memory.reclaim is a v2-only feature, and v2 doesn't have a vmpressure
> > > > > > interface. So I do not see how pre-existing users of the upstream kernel
> > > > > > can see any breakage.
> > > > > >
> > > > >
> > > > > Please note that vmpressure is still being used in v2 by the
> > > > > networking layer (see mem_cgroup_under_socket_pressure()) for
> > > > > detecting memory pressure.
> > > >
> > > > I had missed this. It is hidden quite well. I thought that v2 was
> > > > completely vmpressure-free. I have to admit that the effect of
> > > > mem_cgroup_under_socket_pressure is not really clear to me, not to
> > > > mention whether it should or shouldn't be triggered for user-triggered
> > > > memory reclaim. So this would really need some explanation.
> > >
> > > vmpressure was tied into socket pressure by 8e8ae645249b ("mm:
> > > memcontrol: hook up vmpressure to socket pressure"). A quick look at
> > > the commit log and the code suggests that this is used all over the
> > > socket and tcp code to throttle the memory consumption of the
> > > networking layer if we are under pressure.
> > >
> > > However, for proactive reclaim like memory.reclaim, the goal is to
> > > probe the memcg for cold memory. Reclaiming such memory should not
> > > have a visible effect on workload performance. I don't think that
> > > any network throttling side effects are appropriate here.
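For reference, the consumer side looks roughly like this (paraphrased
from include/net/sock.h; details differ between kernel versions):

/*
 * Paraphrased from include/net/sock.h: how the socket code decides it
 * is under memory pressure. vmpressure feeds the memcg side via
 * memcg->socket_pressure, which mem_cgroup_under_socket_pressure()
 * consults (walking up the hierarchy on v2).
 */
static inline bool sk_under_memory_pressure(const struct sock *sk)
{
	if (!sk->sk_prot->memory_pressure)
		return false;

	if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
		return true;

	return !!*sk->sk_prot->memory_pressure;
}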
> >
> > Please describe the user-visible effects of this change. IIUC this is
> > changing the vmpressure semantics for pre-existing users (v1 when setting
> > the hard limit, for example), and it really should be explained why
> > this is good for them after all these years. I do not see any actual bug
> > being described explicitly, so please make sure this is all properly
> > documented.
> 
> In cgroup v1, user-induced reclaim that is caused by limit-setting (or
> memory.reclaim for systems that choose to expose it in cgroup v1) will
> no longer cause vmpressure notifications, which makes the vmpressure
> behavior consistent with the current psi behavior.

Yes, it makes the behavior consistent with PSI. But is this what existing
users really want or need? This is a user-visible, long-term behavior
change for a legacy interface, and there should be a very good reason to
change that.
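To be concrete about who those users are: a typical v1 consumer
registers an eventfd for memory.pressure_level notifications along these
lines (illustrative sketch per the cgroup v1 memory documentation; the
cgroup path is hypothetical and error handling is trimmed):

#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* cg is e.g. "/sys/fs/cgroup/memory/job" (path is illustrative) */
static int watch_vmpressure(const char *cg)
{
	char buf[256];
	int efd = eventfd(0, 0);
	int cfd, ecfd, n;
	uint64_t cnt;

	snprintf(buf, sizeof(buf), "%s/memory.pressure_level", cg);
	cfd = open(buf, O_RDONLY);
	snprintf(buf, sizeof(buf), "%s/cgroup.event_control", cg);
	ecfd = open(buf, O_WRONLY);

	/* "<event_fd> <pressure_level_fd> <level>" registers the listener */
	n = snprintf(buf, sizeof(buf), "%d %d low", efd, cfd);
	if (write(ecfd, buf, n) < 0)
		return -1;

	/* each read means at least one event at "low" or above fired */
	while (read(efd, &cnt, sizeof(cnt)) == sizeof(cnt))
		fprintf(stderr, "vmpressure events: %llu\n",
			(unsigned long long)cnt);
	return 0;
}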

> In cgroup v2, user-induced reclaim (limit-setting, memory.reclaim, ..)
> would currently cause the networking layer to perceive the memcg as
> being under memory pressure, reducing memory consumption and possibly
> causing throttling. This patch makes the networking layer only
> perceive the memcg as being under pressure when the "pressure" is
> caused by increased memory usage, not limit-setting or proactive
> reclaim, which also makes the definition of memcg memory pressure
> consistent with psi today.

I do understand the argument about proactive reclaim. memory.reclaim is
a new interface, so a) it makes sense to exclude it from the different
memory pressure notification interfaces, and b) there are unlikely to be
many user applications depending on its exact behavior, so the change is
still rather low on the risk scale.
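For the record, the interface in question is driven by simply writing a
size to memory.reclaim; an illustrative userspace sketch (v2 only, the
cgroup path is hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Ask the kernel to proactively reclaim up to `amount` (e.g. "512M")
 * from the cgroup at `cg` (e.g. "/sys/fs/cgroup/job"; illustrative). */
static int request_reclaim(const char *cg, const char *amount)
{
	char path[512];
	int fd, ret = 0;

	snprintf(path, sizeof(path), "%s/memory.reclaim", cg);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	/* the write fails (e.g. EAGAIN) if the full amount could not
	 * be reclaimed */
	if (write(fd, amount, strlen(amount)) < 0)
		ret = -1;
	close(fd);
	return ret;
}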

> In short, the purpose of this patch is to unify the definition of
> memcg memory pressure across psi and vmpressure (which indirectly also
> defines the definition of memcg memory pressure for the networking
> layer). If this sounds good to you, I can add this explanation to the
> commit log, and possibly anywhere you see appropriate in the
> code/docs.

Consistency on its own sounds like a very weak argument for changing
long-term behavior. I do not really see any serious argument for, or
evaluation of, what kind of fallout this change could have on old
applications that are still sticking with v1.

Now that it has been made clear that vmpressure still takes effect in
v2, I do agree that the proactive reclaim case is likely something we
want to have addressed. But I wouldn't touch the v1 semantics, as that
doesn't really buy us much and can potentially break existing users.
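Concretely, I would expect something as simple as skipping the
vmpressure accounting on the memory.reclaim path only, along these lines
(hypothetical sketch; the "proactive" flag and the helper are invented
for illustration, not actual kernel code):

/*
 * Hypothetical sketch: mark reclaim triggered by memory.reclaim as
 * proactive in scan_control and skip vmpressure for that case alone,
 * leaving limit-setting (and thus every v1 path) untouched. Names are
 * invented for illustration.
 */
static void vmpressure_account(struct scan_control *sc, gfp_t gfp,
			       struct mem_cgroup *memcg, bool tree,
			       unsigned long scanned,
			       unsigned long reclaimed)
{
	if (sc->proactive)	/* set only by the memory.reclaim writer */
		return;

	vmpressure(gfp, memcg, tree, scanned, reclaimed);
}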

-- 
Michal Hocko
SUSE Labs
