Message-Id: <20201006093536.5f7ad9e1bc3e2fea2494c229@kernel.org>
Date: Tue, 6 Oct 2020 09:35:36 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Julien Grall <julien@....org>
Cc: Stefano Stabellini <sstabellini@...nel.org>,
xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
Alex Bennée <alex.bennee@...aro.org>,
takahiro.akashi@...aro.org
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn correctly
On Mon, 5 Oct 2020 19:18:47 +0100
Julien Grall <julien@....org> wrote:
> Hi Masami,
>
> On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > address conversion.
> >
> > In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
> > to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
> > assumes the given virtual address is in the contiguous kernel memory
> > area, it cannot convert the per-cpu memory if it is allocated in the
> > vmalloc area (depends on CONFIG_SMP).
>
> Are you sure about this? I have a .config with CONFIG_SMP=y where the
> per-cpu region for CPU0 is allocated outside of vmalloc area.
>
> However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING
> was enabled.
OK, I've confirmed that this depends on CONFIG_NUMA_BALANCING instead
of CONFIG_SMP. I'll update the comment.
>
> [...]
>
> > Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")
>
> FWIW, I think the bug was already present before 250c9af3d831.
Hm, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support") introduced
the per-cpu code.
Thank you,
>
> > Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
> > ---
> >  arch/arm/xen/enlighten.c | 2 +-
> >  include/xen/arm/page.h   | 3 +++
> >  2 files changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index e93145d72c26..a6ab3689b2f4 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
> >  	pr_info("Xen: initializing cpu%d\n", cpu);
> >  	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
> >  
> > -	info.mfn = virt_to_gfn(vcpup);
> > +	info.mfn = percpu_to_gfn(vcpup);
> >  	info.offset = xen_offset_in_page(vcpup);
> >  
> >  	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
> > diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> > index 39df751d0dc4..ac1b65470563 100644
> > --- a/include/xen/arm/page.h
> > +++ b/include/xen/arm/page.h
> > @@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
> >  })
> >  #define gfn_to_virt(m)	(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
> >  
> > +#define percpu_to_gfn(v)	\
> > +	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
> > +
> >  /* Only used in PV code. But ARM guests are always HVM. */
> >  static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
> >  {
> >
>
> Cheers,
>
> --
> Julien Grall
--
Masami Hiramatsu <mhiramat@...nel.org>