Message-ID: <20180821125156.GB29735@dhcp22.suse.cz>
Date: Tue, 21 Aug 2018 14:51:56 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Oscar Salvador <osalvador@...hadventures.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>, tglx@...utronix.de,
joe@...ches.com, arnd@...db.de, gregkh@...uxfoundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH] mm: Fix comment for NODEMASK_ALLOC
On Tue 21-08-18 14:30:24, Oscar Salvador wrote:
> On Tue, Aug 21, 2018 at 02:17:34PM +0200, Michal Hocko wrote:
> > We have had CONFIG_NODES_SHIFT=10 in our SLES kernels for quite some
> > time (since around SLE11-SP3 AFAICS).
> >
> > Anyway, isn't NODEMASK_ALLOC over-engineered a bit? Does anybody
> > actually even do larger than 1024 NUMA nodes? That would be only
> > 128B (1024 bits), and from a quick glance it seems that none of
> > those functions are called in deep stacks. I haven't gone through
> > all of them, but a patch which checks them all and removes
> > NODEMASK_ALLOC would be quite nice IMHO.
>
> No, the maximum we can get is 1024 NUMA nodes.
> I checked this when writing another patch [1]: having gone through
> all arch Kconfigs, CONFIG_NODES_SHIFT=10 is the limit.
>
> NODEMASK_ALLOC is only called from:
>
> - unregister_mem_sect_under_nodes() (not anymore after [1])
> - __nr_hugepages_store_common() (this does not seem to have a deep
>   stack, so we could use a normal nodemask_t)
>
> But it is also used for NODEMASK_SCRATCH (mainly used by mempolicy):
mempolicy code should be a shallow stack as well, mostly just the
syscall entry.
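
For reference, here is a minimal sketch of what NODEMASK_ALLOC and
NODEMASK_SCRATCH expand to, modeled on include/linux/nodemask.h
(written from memory, so not a verbatim copy of the tree):

    /*
     * With NODES_SHIFT > 8 a nodemask_t holds more than 256 bits,
     * i.e. it is larger than 32 bytes, so it gets kmalloc'ed rather
     * than placed on the stack.
     */
    #if NODES_SHIFT > 8
    #define NODEMASK_ALLOC(type, name, gfp_flags)	\
    		type *name = kmalloc(sizeof(*name), gfp_flags)
    #define NODEMASK_FREE(m)	kfree(m)
    #else
    /* Small enough for the stack: just declare a local variable. */
    #define NODEMASK_ALLOC(type, name, gfp_flags)	\
    		type _##name, *name = &_##name
    #define NODEMASK_FREE(m)	do {} while (0)
    #endif

    /* Scratch space of two masks, used mainly by mempolicy. */
    struct nodemask_scratch {
    	nodemask_t	mask1;
    	nodemask_t	mask2;
    };

    #define NODEMASK_SCRATCH(x)					\
    		NODEMASK_ALLOC(struct nodemask_scratch, x,	\
    				GFP_KERNEL | __GFP_NORETRY)
    #define NODEMASK_SCRATCH_FREE(x)	NODEMASK_FREE(x)

With CONFIG_NODES_SHIFT=10 a nodemask_t is 1024 bits (the 128B figure
above), and the two-mask scratch structure is 256B, which should still
be fine on a shallow stack.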
--
Michal Hocko
SUSE Labs