Date:   Thu, 15 Dec 2022 09:58:12 -0800
From:   Wei Xu <weixugc@...gle.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Mina Almasry <almasrymina@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        "Huang, Ying" <ying.huang@...el.com>, Tejun Heo <tj@...nel.org>,
        Zefan Li <lizefan.x@...edance.com>,
        Jonathan Corbet <corbet@....net>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Shakeel Butt <shakeelb@...gle.com>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        Yosry Ahmed <yosryahmed@...gle.com>, fvdl@...gle.com,
        bagasdotme@...il.com, cgroups@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH v3] mm: Add nodes= arg to memory.reclaim

On Wed, Dec 14, 2022 at 2:23 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Tue 13-12-22 11:29:45, Mina Almasry wrote:
> > On Tue, Dec 13, 2022 at 6:03 AM Michal Hocko <mhocko@...e.com> wrote:
> > >
> > > On Tue 13-12-22 14:30:40, Johannes Weiner wrote:
> > > > On Tue, Dec 13, 2022 at 02:30:57PM +0800, Huang, Ying wrote:
> > > [...]
> > > > > After these discussions, I think the solution may be to use different
> > > > > interfaces for "proactive demote" and "proactive reclaim".  That is,
> > > > > reconsider "memory.demote".  In this way, we will always uncharge the
> > > > > cgroup for "memory.reclaim".  This avoids the possible confusion there.
> > > > > And, because demotion is considered aging, we don't need to disable
> > > > > demotion for "memory.reclaim", just don't count it.
> > > >
> > > > Hm, so in summary:
> > > >
> > > > 1) memory.reclaim would demote and reclaim like today, but it would
> > > >    change to only count reclaimed pages against the goal.
> > > >
> > > > 2) memory.demote would only demote.
> > > >
> >
> > If the above 2 points are agreeable then yes, this sounds good to me
> > and does address our use case.
> >
> > > >    a) What if the demotion targets are full? Would it reclaim or fail?
> > > >
> >
> > Wei will chime in if he disagrees, but I think we _require_ that it
> > fail rather than fall back to reclaim. The interface is asking for
> > demotion, and is called memory.demote. For such an interface to fall
> > back to reclaim would be very confusing to userspace and may trigger
> > reclaim on a high priority job that we want to shield from proactive
> > reclaim.
>
> But what should happen if the immediate demotion target is full but
> lower tiers are still usable? Should the first one demote before
> allowing demotion from the top tier?

In that case, the demotion will fall back to the lower tiers.  See
node_get_allowed_targets() and establish_demotion_targets().
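
For illustration only, here is a minimal user-space C sketch of that
fallback logic (this is not the kernel code; MAX_NODES, node_has_space[]
and struct demotion_info are simplified stand-ins): try the preferred
next-tier node first, and if it is full, fall back to any other allowed
lower-tier node.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 64

/* Toy per-node state: does the node still have room for demoted pages? */
static bool node_has_space[MAX_NODES];

struct demotion_info {
	int preferred;               /* preferred next-tier target, -1 if none */
	bool allowed[MAX_NODES];     /* all usable lower-tier nodes */
};

/* Try the preferred node first, then any other allowed lower-tier node. */
static int pick_demotion_target(const struct demotion_info *info)
{
	if (info->preferred >= 0 && node_has_space[info->preferred])
		return info->preferred;

	for (int nid = 0; nid < MAX_NODES; nid++)
		if (info->allowed[nid] && node_has_space[nid])
			return nid;

	return -1;		/* no usable demotion target */
}

int main(void)
{
	struct demotion_info info = { .preferred = 1 };

	info.allowed[1] = true;
	info.allowed[2] = true;
	node_has_space[2] = true;	/* node 1 (preferred) is full */

	printf("demote to node %d\n", pick_demotion_target(&info));	/* -> 2 */
	return 0;
}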

> > > > 3) Would memory.reclaim and memory.demote still need nodemasks?
> >
> > memory.demote will need a nodemask, for sure. Today the nodemask would
> > be useful if there is a specific node in the top tier that is
> > overloaded and we want to reduce the pressure by demoting. In the
> > future there will be N tiers and the nodemask says which tier to
> > demote from.
>
> OK, so what is the exact semantics of the nodemask? Does it control
> where to demote from or to or both?

The nodemask argument proposed here is to control where to demote
from.  We can follow the existing kernel demotion order to select
where to demote to.  If the need to control the demotion destination
arises, another argument can be added.
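
As a usage sketch of how userspace would issue such a request with the
proposed nodes= argument (the cgroup path and size below are just
examples; the "<size> nodes=<nodelist>" string is the format this patch
proposes):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Example cgroup path; adjust to the cgroup being reclaimed. */
	const char *path = "/sys/fs/cgroup/example/memory.reclaim";
	/* Ask for 512M to be reclaimed/demoted from node 0 only. */
	const char *req = "512M nodes=0";

	int fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0)
		perror("write");   /* may fail, e.g. if the full amount could not be freed */
	close(fd);
	return 0;
}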

> > I don't think memory.reclaim would need a nodemask anymore? At least I
> > no longer see a use for it on our side.
> >
> > > >    Would
> > > >    they return -EINVAL if a) memory.reclaim gets passed only toptier
> > > >    nodes or b) memory.demote gets passed any lasttier nodes?
> > >
> >
> > Honestly it would be great if memory.reclaim could force reclaim from
> > top-tier nodes. It breaks the aging pipeline, yes, but if the user is
> > specifically asking for that because they decided in their use case
> > it's a good idea, then the kernel should comply IMO. Not a strict
> > requirement for us. Wei will chime in if he disagrees.
>
> That would require a nodemask to say which nodes to reclaim, no? The
> default behavior should be in line with what standard memory reclaim
> does. If demotion is a part of that process, then it should be part of
> memory.reclaim as well. If we want finer control, then a nodemask is
> really a must, and then the nodemask should constrain both
> aging and reclaim.

Given that the original meaning of memory.reclaim is to free up
memory, I agree that when a nodemask is provided, the kernel should be
allowed to do both aging/demotion and reclaim.  Whether to allow
reclaim from top-tier nodes is a kernel implementation choice.
Userspace should not depend on that.

Also, because the expectation of memory.reclaim is to free up the
specified number of bytes, I think that if a page is demoted but both
its source and target nodes are still in the given nodemask, such a
demoted page should not be counted towards the requested bytes of
memory.reclaim. In the case that no nodemask is given (i.e. to free up
memory from all nodes), demoted pages should never be counted in
the return value of try_to_free_mem_cgroup_pages().
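
To make that accounting rule concrete, here is a small illustrative
sketch (plain C, not kernel code; struct reclaim_request and the
function name are made up): reclaimed pages always count, a demoted
page counts only if it left the requested set of nodes, and demotion
never counts when no nodemask was given.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 64

struct reclaim_request {
	bool has_nodemask;
	bool node_in_mask[MAX_NODES];
};

/* Should this page count toward the requested bytes? */
static bool counts_toward_request(const struct reclaim_request *req,
				  bool reclaimed, int src_node, int dst_node)
{
	if (reclaimed)
		return true;	/* freed to swap/disk: always counts */
	if (!req->has_nodemask)
		return false;	/* demotion without a nodemask: never counts */
	/* Demotion counts only if the page left the requested node set. */
	return req->node_in_mask[src_node] && !req->node_in_mask[dst_node];
}

int main(void)
{
	struct reclaim_request req = { .has_nodemask = true };

	req.node_in_mask[0] = true;	/* request: "... nodes=0" */

	/* Demoted from node 0 to node 1 (outside the mask): counts -> 1 */
	printf("%d\n", counts_toward_request(&req, false, 0, 1));
	/* Demoted from node 0 to node 0 (still in the mask): does not count -> 0 */
	printf("%d\n", counts_toward_request(&req, false, 0, 0));
	return 0;
}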

Meanwhile, I'd argue that even though we want to unify demotion and
reclaim, there are still significant differences between them.
Demotion moves pages between two memory tiers, while reclaim can move
pages to a much slower tier, e.g. disk-based files or swap.  Both the
page movement latencies and the reaccess latencies can be
significantly different between demotion and reclaim.  So it is useful
for userspace to be able to request demotion without reclaim.  A separate
interface, e.g. memory.demote, seems like a good choice for that.
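
Purely as a hypothetical illustration (memory.demote does not exist;
the path and the argument format are assumptions mirroring
memory.reclaim), a demote-only request could then look like this, with
the kernel failing the write instead of falling back to reclaim when
the lower tiers are full:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical interface and example path, for illustration only. */
	int fd = open("/sys/fs/cgroup/example/memory.demote", O_WRONLY);
	if (fd < 0)
		return 1;

	const char *req = "512M nodes=0";	/* demote 512M from node 0, no reclaim */
	ssize_t n = write(fd, req, strlen(req));

	close(fd);
	return n < 0;	/* expected to fail if the lower tiers cannot take the pages */
}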

> > memory.demote returning -EINVAL for lasttier nodes makes sense to me.
> >
> > > I would also add
> > > 4) Do we want to allow to control the demotion path (e.g. which node to
> > >    demote from and to) and how to achieve that?
> >
> > We care deeply about specifying which node to demote _from_. That
> > would be some node that is approaching pressure and we're looking for
> > proactive saving from. So far I haven't seen any reason to control
> > which nodes to demote _to_. The kernel deciding that based on the
> > aging pipeline and the node distances sounds good to me. Obviously
> > someone else may find that useful.
>
> Please keep in mind that the interface should be really prepared for
> future extensions so try to abstract from your immediate usecases.
>
> > > 5) Is the demotion api restricted to multi-tier systems or any numa
> > >    configuration allowed as well?
> > >
> >
> > Demotion will of course not work on single-tiered systems. The
> > interface may return some failure on such systems or not be available
> > at all.
>
> Is there any strong reason for that? We do not have any interface to
> control NUMA balancing from userspace. Why can't we use the interface
> for that purpose?

A demotion interface such as memory.demote will trigger the demotion
code path in the kernel, which depends on multiple memory tiers.

I think what you are getting at is a more general page migration
interface for memcg, which would need both source and target nodes as
arguments. I think this could be a great idea.  It should be able to
support our demotion use cases as well.

> --
> Michal Hocko
> SUSE Labs
