Message-ID: <CAJD7tkY6Jg3+Pb95B0YAvHdgYKvKv_D8Tbc62hX5wzCmWUF6xQ@mail.gmail.com>
Date: Fri, 24 Jun 2022 15:13:55 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Michal Hocko <mhocko@...e.com>, Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <songmuchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>, NeilBrown <neilb@...e.de>,
Alistair Popple <apopple@...dia.com>,
Peter Xu <peterx@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: vmpressure: don't count userspace-induced reclaim as
memory pressure
On Fri, Jun 24, 2022 at 3:10 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Thu, Jun 23, 2022 at 10:26 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> >
> > On Thu, Jun 23, 2022 at 10:04 AM Michal Hocko <mhocko@...e.com> wrote:
> > >
> > > On Thu 23-06-22 09:42:43, Shakeel Butt wrote:
> > > > On Thu, Jun 23, 2022 at 9:37 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > >
> > > > > On Thu 23-06-22 09:22:35, Yosry Ahmed wrote:
> > > > > > On Thu, Jun 23, 2022 at 2:43 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > > > >
> > > > > > > On Thu 23-06-22 01:35:59, Yosry Ahmed wrote:
> > > > > [...]
> > > > > > > > In our internal version of memory.reclaim that we recently upstreamed,
> > > > > > > > we do not account vmpressure during proactive reclaim (similar to how
> > > > > > > > psi is handled upstream). We want to make sure this behavior also
> > > > > > > > exists in the upstream version so that consolidating them does not
> > > > > > > > break our users who rely on vmpressure and will start seeing increased
> > > > > > > > pressure due to proactive reclaim.
> > > > > > >
> > > > > > > These are good reasons to have this patch in your tree. But why is this
> > > > > > > patch beneficial for the upstream kernel? It clearly adds some code and
> > > > > > > some special casing which will add a maintenance overhead.
> > > > > >
> > > > > > It is not just Google, any existing vmpressure users will start seeing
> > > > > > false pressure notifications with memory.reclaim. The main goal of the
> > > > > > patch is to make sure memory.reclaim does not break pre-existing users
> > > > > > of vmpressure, and doing it in a way that is consistent with psi makes
> > > > > > sense.
> > > > >
> > > > > memory.reclaim is a v2-only feature which doesn't have a vmpressure
> > > > > interface. So I do not see how pre-existing users of the upstream kernel
> > > > > can see any breakage.
> > > > >
> > > >
> > > > Please note that vmpressure is still being used in v2 by the
> > > > networking layer (see mem_cgroup_under_socket_pressure()) for
> > > > detecting memory pressure.
> > >
> > > I have missed this. It is hidden quite well. I thought that v2 is
> > > completely vmpressure free. I have to admit that the effect of
> > > mem_cgroup_under_socket_pressure is not really clear to me. Not to
> > > mention whether it should or shouldn't be triggered for user-triggered
> > > memory reclaim. So this would really need some explanation.
> >
> > vmpressure was tied into socket pressure by 8e8ae645249b ("mm:
> > memcontrol: hook up vmpressure to socket pressure"). A quick look at
> > the commit log and the code suggests that this is used all over the
> > socket and tcp code to throttle the memory consumption of the
> > networking layer if we are under pressure.
> >
> > However, for proactive reclaim like memory.reclaim, the target is to
> > probe the memcg for cold memory. Reclaiming such memory should not
> > have a visible effect on the workload performance. I don't think that
> > any network throttling side effects are correct here.
>
> IIUC, this change is fixing two mechanisms during userspace-induced
> memory pressure:
> 1. psi accounting, which I think is not controversial and makes sense to me;
> 2. vmpressure signal, which is a "kinda" obsolete interface and might
> be viewed as controversial.
> I would suggest splitting the patch into two, first to fix psi
> accounting and second to fix vmpressure signal. This way the first one
> (probably the bigger of the two) can be reviewed and accepted easily
> while debates continue on the second one.
This change should be a NOP for psi. psi was already fixed by
e22c6ed90aa9 ("mm: memcontrol: don't count limit-setting reclaim
as memory pressure") by Johannes a while ago. This patch does the same
for vmpressure, but in a different way, since the approach of
e22c6ed90aa9 cannot be reused here.
The psi changes you are seeing in this patch basically revert
e22c6ed90aa9 and use the newly introduced flag that handles
vmpressure to handle psi as well, to avoid having two separate
mechanisms for suppressing memory pressure accounting during
userspace-induced reclaim.
>
> >
> > >
> > > > Though IMO we should deprecate vmpressure altogether.
> > >
> > > Yes it should be really limited to v1. But as I've said the effect on
> > > mem_cgroup_under_socket_pressure is not really clear to me. It really
> > > seems the v2 support has been introduced deliberately.
> > >
> > > --
> > > Michal Hocko
> > > SUSE Labs