Message-ID: <Y5wvAnroJHaWQbCV@dhcp22.suse.cz>
Date: Fri, 16 Dec 2022 09:40:34 +0100
From: Michal Hocko <mhocko@...e.com>
To: Wei Xu <weixugc@...gle.com>
Cc: Mina Almasry <almasrymina@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
"Huang, Ying" <ying.huang@...el.com>, Tejun Heo <tj@...nel.org>,
Zefan Li <lizefan.x@...edance.com>,
Jonathan Corbet <corbet@....net>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <songmuchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Yosry Ahmed <yosryahmed@...gle.com>, fvdl@...gle.com,
bagasdotme@...il.com, cgroups@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v3] mm: Add nodes= arg to memory.reclaim
On Thu 15-12-22 09:58:12, Wei Xu wrote:
> On Wed, Dec 14, 2022 at 2:23 AM Michal Hocko <mhocko@...e.com> wrote:
> >
> > On Tue 13-12-22 11:29:45, Mina Almasry wrote:
> > > On Tue, Dec 13, 2022 at 6:03 AM Michal Hocko <mhocko@...e.com> wrote:
> > > >
> > > > On Tue 13-12-22 14:30:40, Johannes Weiner wrote:
> > > > > On Tue, Dec 13, 2022 at 02:30:57PM +0800, Huang, Ying wrote:
> > > > [...]
> > > > > > After this discussion, I think the solution may be to use different
> > > > > > interfaces for "proactive demote" and "proactive reclaim". That is,
> > > > > > reconsider "memory.demote". In this way, we will always uncharge the
> > > > > > cgroup for "memory.reclaim". This avoids the possible confusion there.
> > > > > > And, because demotion is considered aging, we don't need to disable
> > > > > > demotion for "memory.reclaim", just don't count it.
> > > > >
> > > > > Hm, so in summary:
> > > > >
> > > > > 1) memory.reclaim would demote and reclaim like today, but it would
> > > > > change to only count reclaimed pages against the goal.
> > > > >
> > > > > 2) memory.demote would only demote.
> > > > >
> > >
> > > If the above 2 points are agreeable, then yes, this sounds good to me
> > > and does address our use case.
> > >
> > > > > a) What if the demotion targets are full? Would it reclaim or fail?
> > > > >
> > >
> > > Wei will chime in if he disagrees, but I think we _require_ that it
> > > fail rather than fall back to reclaim. The interface is asking for
> > > demotion, and is called memory.demote. For such an interface to fall
> > > back to reclaim would be very confusing to userspace and may trigger
> > > reclaim on a high-priority job that we want to shield from proactive
> > > reclaim.
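
To make that split concrete, here is a minimal sketch of what the
memory.reclaim write path could look like under points 1) and a) above.
The freed-vs-demoted accounting and the retry logic are assumptions for
illustration, not merged code; only try_to_free_mem_cgroup_pages() and
MEMCG_RECLAIM_MAY_SWAP exist today:

	while (nr_freed < nr_to_reclaim) {
		unsigned long freed;

		if (signal_pending(current))
			return -EINTR;

		/*
		 * Assumption: the return value would count only pages
		 * actually freed, while demotion still happens as part
		 * of aging but is not counted against the request.
		 */
		freed = try_to_free_mem_cgroup_pages(memcg,
				nr_to_reclaim - nr_freed,
				GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP);

		/* fail rather than silently doing something else */
		if (!freed && !nr_retries--)
			return -EAGAIN;

		nr_freed += freed;
	}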
> >
> > But what should happen if the immediate demotion target is full but
> > lower tiers are still usable? Should the intermediate tier have to
> > demote first before demotion from the top tier is allowed?
>
> In that case, the demotion will fall back to the lower tiers. See
> node_get_allowed_targets() and establish_demotion_targets().
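
For reference, the current behavior is that demotion picks one preferred
target node and lets the target allocation fall back within a mask of
all allowed lower tiers. Roughly, simplified from demote_folio_list()
in mm/vmscan.c (surrounding details elided):

	nodemask_t allowed_mask;
	unsigned int nr_succeeded;
	int target_nid = next_demotion_node(pgdat->node_id);
	struct migration_target_control mtc = {
		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
			    __GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT,
		.nid = target_nid,	/* preferred next-tier node */
		.nmask = &allowed_mask,	/* all allowed lower tiers */
	};

	node_get_allowed_targets(pgdat, &allowed_mask);
	/*
	 * If target_nid is full, the allocation for the migration
	 * target falls back to other nodes in allowed_mask, i.e. to
	 * lower tiers.
	 */
	migrate_pages(demote_folios, alloc_demote_page, NULL,
		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
		      &nr_succeeded);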
I am not asking about the current implicit behavior, which is exactly
what we do not want to cast into the interface. If we want to allow
fine-grained control over demotion then the implementation shouldn't
rely on the current behavior.
[...]
> > Is there any strong reason for that? We do not have any interface to
> > control NUMA balancing from userspace. Why can't we use the interface
> > for that purpose?
>
> A demotion interface such as memory.demote will trigger the demotion
> code path in the kernel, which depends on multiple memory tiers.
Demotion is just a fancy name for a directed migration. There is no
real dependency on the HW or the technology.
> I think what you are getting at is a more general page migration
> interface for memcg, which will need both source and target nodes as
> arguments. I think this can be a great idea. It should be able to
> support our demotion use cases as well.
yes.
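
Nothing like this exists today, but just to illustrate the shape such
an interface could take (entirely hypothetical, including the
memory.migrate name, the argument syntax and the helper below):

	/* hypothetically: echo "1G from=0-1 to=2" > memory.migrate */
	static ssize_t memory_migrate(struct kernfs_open_file *of,
				      char *buf, size_t nbytes, loff_t off)
	{
		struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
		nodemask_t from, to;
		unsigned long nr_pages, migrated;

		/* parse "<size> from=<nodes> to=<nodes>" from buf ... */

		/*
		 * A hypothetical helper that walks the memcg's LRUs on
		 * the source nodes and migrates up to nr_pages cold
		 * pages to the target nodes. Demotion then becomes one
		 * particular from/to choice rather than a special case.
		 */
		migrated = memcg_migrate_pages(memcg, &from, &to, nr_pages);

		return migrated ? nbytes : -EAGAIN;
	}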
--
Michal Hocko
SUSE Labs