Message-ID: <20240719165143.0000002e@Huawei.com>
Date: Fri, 19 Jul 2024 16:51:43 +0100
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: David Hildenbrand <david@...hat.com>
CC: Mike Rapoport <rppt@...nel.org>, <linux-kernel@...r.kernel.org>,
"Alexander Gordeev" <agordeev@...ux.ibm.com>, Andreas Larsson
<andreas@...sler.com>, Andrew Morton <akpm@...ux-foundation.org>, Arnd
Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>, Catalin Marinas
<catalin.marinas@....com>, Christophe Leroy <christophe.leroy@...roup.eu>,
Dan Williams <dan.j.williams@...el.com>, Dave Hansen
<dave.hansen@...ux.intel.com>, "David S. Miller" <davem@...emloft.net>, Greg
Kroah-Hartman <gregkh@...uxfoundation.org>, Heiko Carstens
<hca@...ux.ibm.com>, "Huacai Chen" <chenhuacai@...nel.org>, Ingo Molnar
<mingo@...hat.com>, Jiaxun Yang <jiaxun.yang@...goat.com>, John Paul Adrian
Glaubitz <glaubitz@...sik.fu-berlin.de>, Michael Ellerman
<mpe@...erman.id.au>, Palmer Dabbelt <palmer@...belt.com>, "Rafael J.
Wysocki" <rafael@...nel.org>, Rob Herring <robh@...nel.org>, Thomas
Bogendoerfer <tsbogend@...ha.franken.de>, Thomas Gleixner
<tglx@...utronix.de>, "Vasily Gorbik" <gor@...ux.ibm.com>, Will Deacon
<will@...nel.org>, <linux-arm-kernel@...ts.infradead.org>,
<loongarch@...ts.linux.dev>, <linux-mips@...r.kernel.org>,
<linuxppc-dev@...ts.ozlabs.org>, <linux-riscv@...ts.infradead.org>,
<linux-s390@...r.kernel.org>, <linux-sh@...r.kernel.org>,
<sparclinux@...r.kernel.org>, <linux-acpi@...r.kernel.org>,
<linux-cxl@...r.kernel.org>, <nvdimm@...ts.linux.dev>,
<devicetree@...r.kernel.org>, <linux-arch@...r.kernel.org>,
<linux-mm@...ck.org>, <x86@...nel.org>
Subject: Re: [PATCH 05/17] arch, mm: pull out allocation of NODE_DATA to
generic code
On Fri, 19 Jul 2024 17:07:35 +0200
David Hildenbrand <david@...hat.com> wrote:
> >>> - * Allocate node data. Try node-local memory and then any node.
> >>> - * Never allocate in DMA zone.
> >>> - */
> >>> - nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
> >>> - if (!nd_pa) {
> >>> - pr_err("Cannot find %zu bytes in any node (initial node: %d)\n",
> >>> - nd_size, nid);
> >>> - return;
> >>> - }
> >>> - nd = __va(nd_pa);
> >>> -
> >>> - /* report and initialize */
> >>> - printk(KERN_INFO "NODE_DATA(%d) allocated [mem %#010Lx-%#010Lx]\n", nid,
> >>> - nd_pa, nd_pa + nd_size - 1);
> >>> - tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT);
> >>> - if (tnid != nid)
> >>> - printk(KERN_INFO " NODE_DATA(%d) on node %d\n", nid, tnid);
> >>> -
> >>> - node_data[nid] = nd;
> >>> - memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
> >>> -
> >>> - node_set_online(nid);
> >>> -}
> >>> -
> >>> /**
> >>> * numa_cleanup_meminfo - Cleanup a numa_meminfo
> >>> * @mi: numa_meminfo to clean up
> >>> @@ -571,6 +538,7 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
> >>> continue;
> >>> alloc_node_data(nid);
> >>> + node_set_online(nid);
> >>> }
> >>
> >> I can spot that we only remove a single node_set_online() call from x86.
> >>
> >> What about all the other architectures? Will there be any change in behavior
> >> for them? Or do we simply set the nodes online later once more?
> >
> > On x86 node_set_online() was a part of alloc_node_data() and I moved it
> > outside so it's called right after alloc_node_data(). On other
> > architectures the allocation didn't include that call, so there should be
> > no difference there.
>
> But won't their arch code try setting the nodes online at a later stage?
>
> And I think, some architectures only set nodes online conditionally
> (see most other node_set_online() calls).
>
> Sorry if I'm confused here, but with now unconditional node_set_online(), won't
> we change the behavior of other architectures?
This is moving x86 code to x86 code, not to a generic location,
so how would that affect anyone else? Their onlining should be the
same as before.
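
Roughly, reconstructing from the hunks quoted above (a sketch of the
shape of the change, not the exact patch; surrounding loop details
elided):

	/* before: x86's private alloc_node_data() onlined the node itself */
	static void __init alloc_node_data(int nid)
	{
		...
		node_data[nid] = nd;
		memset(NODE_DATA(nid), 0, sizeof(pg_data_t));

		node_set_online(nid);
	}

	/* after: the now-generic helper only allocates; x86's
	 * numa_register_memblks() onlines the node explicitly,
	 * immediately after the allocation, as before.
	 */
	...
		alloc_node_data(nid);
		node_set_online(nid);
	...

So the onlining call never left x86; it just moved from the helper to
its only x86 call site.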
The node onlining differences are a pain (I recall that fun from adding
generic initiators), as the ordering differs between x86 and arm64 at least.
Jonathan
>