Message-ID: <20200318175529.GA6263@carbon.dhcp.thefacebook.com>
Date: Wed, 18 Mar 2020 10:55:29 -0700
From: Roman Gushchin <guro@...com>
To: Michal Hocko <mhocko@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<kernel-team@...com>, <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...riel.com>,
Andreas Schaufler <andreas.schaufler@....de>,
Mike Kravetz <mike.kravetz@...cle.com>,
Guido Günther <agx@...xcpu.org>,
Naresh Kamboju <naresh.kamboju@...aro.org>
Subject: Re: [PATCH] mm: hugetlb: fix hugetlb_cma_reserve() if CONFIG_NUMA
isn't set
On Wed, Mar 18, 2020 at 05:16:25PM +0100, Michal Hocko wrote:
> On Wed 18-03-20 08:34:24, Roman Gushchin wrote:
> > If CONFIG_NUMA isn't set, there is no need to ensure that
> > the hugetlb cma area belongs to a specific numa node.
> >
> > min/max_low_pfn can be used for limiting the maximum size
> > of the hugetlb_cma area.
> >
> > Also for_each_mem_pfn_range() is defined only if
> > CONFIG_HAVE_MEMBLOCK_NODE_MAP is set, and on arm (unlike most
> > other architectures) it depends on CONFIG_NUMA. This makes the
> > build fail if CONFIG_NUMA isn't set.
>
> CONFIG_HAVE_MEMBLOCK_NODE_MAP has popped out as a problem several times
> already. Is there any real reason we cannot make it unconditional?
> Essentially make the functionality always enabled and drop the config?
It depends on CONFIG_NUMA only on arm, and I really don't know
if there is a good justification for that. If not, that would be a much
simpler fix.
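
For context, the build breaks because for_each_mem_pfn_range() only
exists behind that config; roughly (paraphrasing
include/linux/memblock.h from memory, not a verbatim quote):

#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
/* Walk all memblock pfn ranges that belong to node @nid. */
#define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */

So when the config is off, the macro simply isn't defined and any
user fails to compile.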
> The code below is ugly as hell. Just look at it. You have
> for_each_node_state without any ifdefery but then having ifdef
> CONFIG_NUMA. That just doesn't make any sense.
I wouldn't say it makes no sense:
the code tries to reserve a cma area on each node (hence for_each_node_state()),
and it uses for_each_mem_pfn_range() to get the min and max pfn
for each node. With !CONFIG_NUMA the first part is reduced to a single
iteration and the second part isn't required at all.
I agree that for_each_mem_pfn_range() here looks quite ugly, but I don't know
of a better way to get min/max pfns for a node so early in the boot process.
If somebody has any ideas here, I'd appreciate them a lot.
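
To spell out what the loop relies on (a simplified sketch of the
relevant part of hugetlb_cma_reserve() from the patch below, not new
code):

	/*
	 * for_each_mem_pfn_range() walks the node's memblock ranges in
	 * ascending address order, so the first start is the node's
	 * lowest pfn and the last end is its highest.
	 */
#ifdef CONFIG_NUMA
	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
		if (!min_pfn)
			min_pfn = start_pfn;
		max_pfn = end_pfn;
	}
#else
	/* Single node: all of low memory, bounded by min/max_low_pfn. */
	min_pfn = min_low_pfn;
	max_pfn = max_low_pfn;
#endif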
I know Rik plans some further improvements here, so the goal for now
is to fix the build. If you think that enabling CONFIG_HAVE_MEMBLOCK_NODE_MAP
unconditionally is the way to go, I'm fine with that too.
Rik also posted a different fix for the build problem, but from what I've
seen it didn't solve it completely. I'm fine with either option here.
Thanks!
>
> > Signed-off-by: Roman Gushchin <guro@...com>
> > Reported-by: Andreas Schaufler <andreas.schaufler@....de>
> > ---
> > mm/hugetlb.c | 11 ++++++++---
> > 1 file changed, 8 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 7a20cae7c77a..a6161239abde 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5439,16 +5439,21 @@ void __init hugetlb_cma_reserve(int order)
> >
> >  	reserved = 0;
> >  	for_each_node_state(nid, N_ONLINE) {
> > -		unsigned long start_pfn, end_pfn;
> >  		unsigned long min_pfn = 0, max_pfn = 0;
> > -		int res, i;
> > +		int res;
> > +#ifdef CONFIG_NUMA
> > +		unsigned long start_pfn, end_pfn;
> > +		int i;
> >
> >  		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> >  			if (!min_pfn)
> >  				min_pfn = start_pfn;
> >  			max_pfn = end_pfn;
> >  		}
> > -
> > +#else
> > +		min_pfn = min_low_pfn;
> > +		max_pfn = max_low_pfn;
> > +#endif
> >  		size = max(per_node, hugetlb_cma_size - reserved);
> >  		size = round_up(size, PAGE_SIZE << order);
> >
> > --
> > 2.24.1
>
> --
> Michal Hocko
> SUSE Labs
>