Message-ID: <CAHbLzkpnvqDCKjf7cmZDcVROAkh_Vzu3HXRJgkZsqp+xVokRZA@mail.gmail.com>
Date:   Mon, 6 Dec 2021 10:26:32 -0800
From:   Yang Shi <shy828301@...il.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Nico Pache <npache@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Kirill Tkhai <ktkhai@...tuozzo.com>,
        Roman Gushchin <guro@...com>,
        Vladimir Davydov <vdavydov.dev@...il.com>, raquini@...hat.com
Subject: Re: [RFC PATCH 2/2] mm/vmscan.c: Prevent allocating shrinker_info on
 offlined nodes

On Mon, Dec 6, 2021 at 6:53 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Mon 06-12-21 15:30:37, Vlastimil Babka wrote:
> > On 12/6/21 15:21, Michal Hocko wrote:
> > > On Mon 06-12-21 15:08:10, David Hildenbrand wrote:
> > >>
> > >> >> But there might be more missing. Onlining a new zone will get more
> > >> >> expensive in setups with a lot of possible nodes (x86-64 shouldn't
> > >> >> really be an issue in that regard).
> > >> >
> > >> > Honestly, I am not really concerned by platforms with too many nodes
> > >> > without any memory. If they want to shoot themselves in the foot then
> > >> > that's their choice. We can optimize for those if they ever prove to be
> > >> > standard.
> > >> >
> > >> >> If we want stable backports, we'll want something simple upfront.
> > >> >
> > >> > For stable backports I would be fine with doing your NODE_DATA check in
> > >> > the allocator. In upstream I think we should be aiming for a more robust
> > >> > solution that is also easier to maintain further down the line. Even if
> > >> > that is an investment at this moment because the initialization code is
> > >> > a mess.
> > >> >
> > >>
> > >> Agreed. I would be curious *why* we decided to dynamically allocate the
> > >> pgdat. Is this just a historical coincidence, or was there a real reason
> > >> not to allocate it for all possible nodes during boot?
> > >
> > > I don't know, but if I were to guess, the most likely explanation would be
> > > that the numa init code was in a similar order as now and it was easier
> > > to simply allocate a pgdat when a new node was onlined.
> > > 9af3c2dea3a3 ("[PATCH] pgdat allocation for new node add (call pgdat allocation)")
> > > doesn't really tell much.
> >
> > I don't know if that's true for pgdat specifically, but generally IMHO the
> > advantages of allocating during/after online, instead of for each possible
> > node, are:
> > - memory savings when some possible node is actually never onlined
> > - at least in some cases, the allocations can be local to the node in
> > question, where the advantages are:
> >   - faster access
> >   - less memory occupied on nodes that are onlined earlier, especially node 0
> >
> > So while the approach of allocating on boot for all possible nodes instead
> > of just online nodes has the advantage of being generally safer and simpler
> > (no memory hotplug callbacks etc.), we should also be careful not to overdo
> > this approach so we don't end up with node 0 memory filled with structures
> > used for nodes 1-X that are only onlined later. I imagine that could be a
> > problem even for "sane" archs that don't have tons of possible but offline
> > nodes.
>
> Yes, this can indeed turn out to be a problem, as the memory allocations
> scale not only with NUMA nodes but with memcgs as well. The latter is the
> more visible one.
>
> > Concretely, pgdat should probably be fine, but things like all shrinkers?
> > Maybe less so.
>
> Yeah, right. But for that purpose the concept of online_node is just
> misleading. You would need to check whether the node is populated with
> memory and implement hotplug notifiers.

Yes, the con is memory waste. I think it is a known problem, since
memcg has per-node data (a.k.a. mem_cgroup_per_node_info) which holds
the lruvec and shrinker infos. And the comment in
alloc_mem_cgroup_per_node_info() does say:

"TODO: this routine can waste much memory for nodes which will never
be onlined. It's better to use memory hotplug callback function."
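
That memory hotplug callback route would look roughly like the below.
This is just a sketch of the generic memory hotplug notifier
interface (the callback name and the alloc/free bodies are
hypothetical), not a real patch:

    #include <linux/memory.h>
    #include <linux/notifier.h>

    /* Allocate/free memcg per-node structures as nodes gain/lose memory. */
    static int memcg_node_callback(struct notifier_block *self,
                                   unsigned long action, void *arg)
    {
            struct memory_notify *mn = arg;
            int nid = mn->status_change_nid;

            /* no node-level state change for this memory block */
            if (nid == NUMA_NO_NODE)
                    return NOTIFY_OK;

            switch (action) {
            case MEM_GOING_ONLINE:
                    /* allocate the per-node info for nid here; return
                     * NOTIFY_BAD on failure to cancel the onlining */
                    break;
            case MEM_OFFLINE:
                    /* and optionally free it again here */
                    break;
            }
            return NOTIFY_OK;
    }

    static int __init memcg_node_notifier_init(void)
    {
            return hotplug_memory_notifier(memcg_node_callback, 0);
    }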

But IMHO the memory usage should actually not be that bad for
memcg-heavy use cases, since there should not be too many "never
onlined" nodes for such workloads?
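
And FWIW, the NODE_DATA() check in the allocator that Michal mentioned
for stable could look roughly like the below in alloc_shrinker_info().
An illustrative sketch rather than the actual RFC patch; falling back
to NUMA_NO_NODE is just one possible policy (the patch could also skip
such nodes entirely):

    for_each_node(nid) {
            /* possible-but-never-onlined nodes have no pgdat yet */
            int alloc_nid = NODE_DATA(nid) ? nid : NUMA_NO_NODE;

            info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL,
                                 alloc_nid);
            if (!info) {
                    free_shrinker_info(memcg);
                    ret = -ENOMEM;
                    break;
            }
            /* ... rest of the loop body unchanged ... */
    }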

>
> --
> Michal Hocko
> SUSE Labs
