Message-ID: <CAPcyv4gPDDHyev9cYHhrC4Z8rWRMZCS3xpXe+XWdgJ074renXw@mail.gmail.com>
Date: Wed, 19 Sep 2018 10:28:39 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Dave Jiang <dave.jiang@...el.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/numa_emulation: fix parsing of numa_meminfo for
uniform numa emulation
On Wed, Sep 19, 2018 at 10:20 AM Dave Jiang <dave.jiang@...el.com> wrote:
>
> During fakenuma processing in numa_emulation(), pi is passed in and
> processed as the new fake numa nodes are split out. Once the original
> memory region is processed, it is removed from pi by
> numa_remove_memblk_from() in emu_setup_memblk(). So entry 0 gets deleted
> and the rest of the entries move up. Therefore we should always pass
> in entry 0 as the next entry to process.
>
> Fixes: 1f6a2c6d9f121 ("x86/numa_emulation: Introduce uniform split capability")
>
> Cc: Dan Williams <dan.j.williams@...el.com>
> Signed-off-by: Dave Jiang <dave.jiang@...el.com>
Thanks Dave! I missed this behavior in my testing.
Reviewed-by: Dan Williams <dan.j.williams@...el.com>
> ---
> arch/x86/mm/numa_emulation.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
> index b54d52a2d00a..a3ca8bf5afcb 100644
> --- a/arch/x86/mm/numa_emulation.c
> +++ b/arch/x86/mm/numa_emulation.c
> @@ -401,8 +401,8 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt)
> ret = -1;
> for_each_node_mask(i, physnode_mask) {
We might put a comment here because the use of 0 is non-obvious at first
glance (a short sketch of why index 0 is the right choice follows the
quoted patch below).
> ret = split_nodes_size_interleave_uniform(&ei, &pi,
> - pi.blk[i].start, pi.blk[i].end, 0,
> - n, &pi.blk[i], nid);
> + pi.blk[0].start, pi.blk[0].end, 0,
> + n, &pi.blk[0], nid);
> if (ret < 0)
> break;
> if (ret < n) {
>
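For anyone who trips over the index choice later, here is a minimal,
self-contained sketch of the behavior. The struct and helper below are
simplified stand-ins I made up for struct numa_meminfo and
numa_remove_memblk_from(), not the actual kernel code: once a block has
been processed and removed, the remaining entries shift up, so the next
block to split is always at index 0.

#include <stdio.h>
#include <string.h>

/* Simplified stand-in for struct numa_meminfo. */
struct meminfo {
	int nr_blks;
	struct { unsigned long start, end; } blk[4];
};

/* Roughly what the removal does: drop entry @idx, shift the rest up. */
static void remove_blk(struct meminfo *mi, int idx)
{
	mi->nr_blks--;
	memmove(&mi->blk[idx], &mi->blk[idx + 1],
		(mi->nr_blks - idx) * sizeof(mi->blk[0]));
}

int main(void)
{
	struct meminfo pi = {
		.nr_blks = 3,
		.blk = { { 0, 4 }, { 4, 8 }, { 8, 12 } },
	};

	/*
	 * After each removal the remaining entries move up, so the
	 * "next" physical block is always pi.blk[0] -- indexing with
	 * the loop variable would skip blocks and read stale entries.
	 */
	while (pi.nr_blks) {
		printf("splitting blk[0]: %lu-%lu\n",
		       pi.blk[0].start, pi.blk[0].end);
		remove_blk(&pi, 0);
	}
	return 0;
}

Obviously just illustrative; the change in the patch above is the right
fix for the real loop.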