Message-ID: <5607eaea-5cc3-31a4-9685-1d7dd147f564@oracle.com>
Date: Wed, 7 Feb 2018 19:59:18 -0500
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Prarit Bhargava <prarit@...hat.com>, linux-kernel@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Juergen Gross <jgross@...e.com>,
Dou Liyang <douly.fnst@...fujitsu.com>,
Kate Stewart <kstewart@...uxfoundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andy Lutomirski <luto@...nel.org>,
Andi Kleen <ak@...ux.intel.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
xen-devel@...ts.xenproject.org, simon@...isiblethingslab.com
Subject: Re: [PATCH] x86/xen: Calculate __max_logical_packages on PV domains
On 02/07/2018 06:49 PM, Prarit Bhargava wrote:
> The kernel panics on PV domains because native_smp_cpus_done(), which
> computes __max_logical_packages, is only called for HVM domains.
>
> Calculate __max_logical_packages for PV domains.
>
> Fixes: b4c0a7326f5d ("x86/smpboot: Fix __max_logical_packages estimate")
> Signed-off-by: Prarit Bhargava <prarit@...hat.com>
> Tested-and-reported-by: Simon Gaiser <simon@...isiblethingslab.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: x86@...nel.org
> Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
> Cc: Juergen Gross <jgross@...e.com>
> Cc: Dou Liyang <douly.fnst@...fujitsu.com>
> Cc: Prarit Bhargava <prarit@...hat.com>
> Cc: Kate Stewart <kstewart@...uxfoundation.org>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: Andy Lutomirski <luto@...nel.org>
> Cc: Andi Kleen <ak@...ux.intel.com>
> Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
> Cc: xen-devel@...ts.xenproject.org
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@...cle.com>
(+ Simon)
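
For the archives, the resulting flow as I read the diff (a sketch of the
call paths, not the literal call sites):

	/*
	 * HVM: xen_smp_cpus_done()
	 *        -> native_smp_cpus_done()
	 *             -> calculate_max_logical_packages()
	 *
	 * PV:  xen_smp_cpus_done()
	 *        -> calculate_max_logical_packages()
	 *
	 * so __max_logical_packages is computed exactly once on either path.
	 */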
> ---
> arch/x86/include/asm/smp.h | 1 +
> arch/x86/kernel/smpboot.c | 10 ++++++++--
> arch/x86/xen/smp.c | 2 ++
> 3 files changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
> index 461f53d27708..a4189762b266 100644
> --- a/arch/x86/include/asm/smp.h
> +++ b/arch/x86/include/asm/smp.h
> @@ -129,6 +129,7 @@ static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
> void cpu_disable_common(void);
> void native_smp_prepare_boot_cpu(void);
> void native_smp_prepare_cpus(unsigned int max_cpus);
> +void calculate_max_logical_packages(void);
> void native_smp_cpus_done(unsigned int max_cpus);
> void common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
> int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 6f27facbaa9b..767573b7f2db 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1281,11 +1281,10 @@ void __init native_smp_prepare_boot_cpu(void)
> cpu_set_state_online(me);
> }
>
> -void __init native_smp_cpus_done(unsigned int max_cpus)
> +void __init calculate_max_logical_packages(void)
> {
> int ncpus;
>
> - pr_debug("Boot done\n");
> /*
> * Today neither Intel nor AMD support heterogenous systems so
> * extrapolate the boot cpu's data to all packages.
> @@ -1293,6 +1292,13 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
> ncpus = cpu_data(0).booted_cores * topology_max_smt_threads();
> __max_logical_packages = DIV_ROUND_UP(nr_cpu_ids, ncpus);
> pr_info("Max logical packages: %u\n", __max_logical_packages);
> +}
> +
> +void __init native_smp_cpus_done(unsigned int max_cpus)
> +{
> + pr_debug("Boot done\n");
> +
> + calculate_max_logical_packages();
>
> if (x86_has_numa_in_package)
> set_sched_topology(x86_numa_in_package_topology);
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 77c959cf81e7..7a43b2ae19f1 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -122,6 +122,8 @@ void __init xen_smp_cpus_done(unsigned int max_cpus)
>
> if (xen_hvm_domain())
> native_smp_cpus_done(max_cpus);
> + else
> + calculate_max_logical_packages();
>
> if (xen_have_vcpu_info_placement)
> return;
>
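As a quick user-space sanity check of the extracted helper's arithmetic,
here is a standalone sketch (hypothetical topology numbers, not taken from
this patch):

	#include <stdio.h>

	/* Same rounding the kernel's DIV_ROUND_UP() macro performs. */
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		/* Hypothetical box: 2 packages, 8 cores/package, 2 SMT threads. */
		unsigned int nr_cpu_ids   = 32;	/* possible CPU ids */
		unsigned int booted_cores = 8;	/* cores in the boot package */
		unsigned int smt_threads  = 2;	/* threads per core */

		unsigned int ncpus = booted_cores * smt_threads;	/* 16 */
		unsigned int max_pkgs = DIV_ROUND_UP(nr_cpu_ids, ncpus);

		printf("Max logical packages: %u\n", max_pkgs);	/* prints 2 */
		return 0;
	}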