Message-ID: <87r1ynso8l.fsf@linux.ibm.com>
Date: Fri, 21 Feb 2020 14:02:18 -0600
From: Nathan Lynch <nathanl@...ux.ibm.com>
To: Scott Cheloha <cheloha@...ux.ibm.com>
Cc: Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
David Hildenbrand <david@...hat.com>,
Michal Suchanek <msuchanek@...e.com>,
Michal Hocko <mhocko@...e.com>,
Nathan Fontenont <ndfont@...il.com>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
Michael Ellerman <mpe@...erman.id.au>
Subject: Re: [PATCH v2 1/2] powerpc/drmem: accelerate memory_add_physaddr_to_nid() with LMB xarray

Hi Scott,

Scott Cheloha <cheloha@...ux.ibm.com> writes:
> On PowerPC, memory_add_physaddr_to_nid() uses a linear search to find
> an LMB matching the given address. This scales very poorly when there
> are many LMBs. The poor scaling cripples drmem_init() during boot:
> lmb_set_nid(), which calls memory_add_physaddr_to_nid(), is called for
> each LMB.
>
> If we index each LMB in an xarray by its base address we can achieve
> O(log n) search during memory_add_physaddr_to_nid(), which scales much
> better.
Is O(log n) correct? I had thought xarray lookup complexity depends on
the characteristics of the keys rather than on the number of entries.
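(For reference, I assume the lookup helper added elsewhere in this
patch is essentially this -- a sketch, relying on the xarray this patch
introduces:

static struct drmem_lmb *drmem_find_lmb_by_base_addr(u64 base_addr)
{
	/* xa_load() walks a radix tree; depth tracks the largest index. */
	return xa_load(&drmem_lmb_xa_base_addr, base_addr);
}

i.e. the cost is governed by how large the base addresses get, not by
how many LMBs are stored.)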
Repeating some points from my comments on v1 below.
> +static int drmem_cache_lmb_for_lookup(struct drmem_lmb *lmb)
This is called only from __init functions, so it should be __init as
well (see the sketch below this hunk).
> +{
> + void *ret;
> +
> + ret = xa_store(&drmem_lmb_xa_base_addr, lmb->base_addr, lmb,
> + GFP_KERNEL);
> + if (xa_is_err(ret))
> + return xa_err(ret);
> +
> + return 0;
> +}
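I.e. (sketch):

static int __init drmem_cache_lmb_for_lookup(struct drmem_lmb *lmb)

so the function can be discarded along with the rest of the init text
once boot completes.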
[...]
> @@ -364,6 +383,8 @@ static void __init init_drmem_v1_lmbs(const __be32 *prop)
>
> for_each_drmem_lmb(lmb) {
> read_drconf_v1_cell(lmb, &prop);
> + if (drmem_cache_lmb_for_lookup(lmb) != 0)
> + return;
> lmb_set_nid(lmb);
> }
> }
> @@ -411,6 +432,9 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
> lmb->aa_index = dr_cell.aa_index;
> lmb->flags = dr_cell.flags;
>
> + if (drmem_cache_lmb_for_lookup(lmb) != 0)
> + return;
> +
Failing to record an lmb in the cache shouldn't be cause for silently
aborting this initialization. If the failure is ignored instead, future
lookups against the cache may fail (should the system even boot), but
the drmem_lmbs themselves will still be initialized correctly.
I'd say just ignore (or perhaps log once) xa_store() failures as long as
this code only runs at boot.
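E.g. at the call site (sketch; the message text is just illustrative):

	for_each_drmem_lmb(lmb) {
		read_drconf_v1_cell(lmb, &prop);
		/* A failed cache insert degrades later lookups,
		 * but must not abort drmem initialization. */
		if (drmem_cache_lmb_for_lookup(lmb) != 0)
			pr_err_once("drmem: failed to cache LMB for lookup\n");
		lmb_set_nid(lmb);
	}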
> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> index 3c7dec70cda0..0fd7963a991e 100644
> --- a/arch/powerpc/mm/numa.c
> +++ b/arch/powerpc/mm/numa.c
> @@ -958,27 +958,18 @@ early_param("topology_updates", early_topology_updates);
> static int hot_add_drconf_scn_to_nid(unsigned long scn_addr)
> {
> struct drmem_lmb *lmb;
> - unsigned long lmb_size;
> - int nid = NUMA_NO_NODE;
> -
> - lmb_size = drmem_lmb_size();
> -
> - for_each_drmem_lmb(lmb) {
> - /* skip this block if it is reserved or not assigned to
> - * this partition */
> - if ((lmb->flags & DRCONF_MEM_RESERVED)
> - || !(lmb->flags & DRCONF_MEM_ASSIGNED))
> - continue;
> -
> - if ((scn_addr < lmb->base_addr)
> - || (scn_addr >= (lmb->base_addr + lmb_size)))
> - continue;
>
> - nid = of_drconf_to_nid_single(lmb);
> - break;
> - }
> + lmb = drmem_find_lmb_by_base_addr(scn_addr);
It occurs to me that the old behavior here will succeed in looking up a
drmem_lmb if scn_addr is within it, while the new behavior is that the
given address must match the starting address of the drmem_lmb. I think
this is probably OK though?
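If the exact-match semantics ever turn out to matter, I suppose the old
containment behavior could be approximated by rounding the address down
before the lookup (sketch, assuming LMB base addresses are aligned to
the LMB size):

	lmb = drmem_find_lmb_by_base_addr(ALIGN_DOWN(scn_addr,
						     drmem_lmb_size()));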
> + if (lmb == NULL)
> + return NUMA_NO_NODE;
> +
> + /* can't use this block if it is reserved or not assigned to
> + * this partition */
> + if ((lmb->flags & DRCONF_MEM_RESERVED)
> + || !(lmb->flags & DRCONF_MEM_ASSIGNED))
> + return NUMA_NO_NODE;
>
> - return nid;
> + return of_drconf_to_nid_single(lmb);
> }
Otherwise I have no concerns about the changes to
hot_add_drconf_scn_to_nid.
Other than the minor points I've made here, this looks like a tidy
self-contained change to fix drmem initialization latency.