Message-ID: <0330c9df-7ede-815b-0e6e-10fb883eda35@gmail.com>
Date: Mon, 19 Oct 2020 20:36:22 +0800
From: Haiwei Li <lihaiwei.kernel@...il.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: pbonzini@...hat.com, sean.j.christopherson@...el.com,
wanpengli@...cent.com, jmattson@...gle.com, joro@...tes.org,
Haiwei Li <lihaiwei@...cent.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] KVM: Check the allocation of pv cpu mask
On 20/10/19 19:23, Vitaly Kuznetsov wrote:
> lihaiwei.kernel@...il.com writes:
>
>> From: Haiwei Li <lihaiwei@...cent.com>
>>
>> Check the allocation of the per-cpu __pv_cpu_mask. Initialize
>> 'send_IPI_mask_allbutself' only when the allocation succeeds, and check
>> the allocation of __pv_cpu_mask in 'kvm_flush_tlb_others'.
>>
>> Suggested-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>> Signed-off-by: Haiwei Li <lihaiwei@...cent.com>
>> ---
>> v1 -> v2:
>> * add CONFIG_SMP for kvm_send_ipi_mask_allbutself to prevent build error
>> v2 -> v3:
>> * always check the allocation of __pv_cpu_mask in kvm_flush_tlb_others
>> v3 -> v4:
>> * move kvm_setup_pv_ipi to kvm_alloc_cpumask and get rid of kvm_apic_init
>>
>> arch/x86/kernel/kvm.c | 53 +++++++++++++++++++++++++++++--------------
>> 1 file changed, 36 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 42c6e0deff9e..be28203cc098 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -547,16 +547,6 @@ static void kvm_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
>> __send_ipi_mask(local_mask, vector);
>> }
>>
>> -/*
>> - * Set the IPI entry points
>> - */
>> -static void kvm_setup_pv_ipi(void)
>> -{
>> - apic->send_IPI_mask = kvm_send_ipi_mask;
>> - apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
>> - pr_info("setup PV IPIs\n");
>> -}
>> -
>> static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
>> {
>> int cpu;
>> @@ -619,6 +609,11 @@ static void kvm_flush_tlb_others(const struct cpumask *cpumask,
>> struct kvm_steal_time *src;
>> struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_cpu_mask);
>>
>> + if (unlikely(!flushmask)) {
>> + native_flush_tlb_others(cpumask, info);
>> + return;
>> + }
>> +
>> cpumask_copy(flushmask, cpumask);
>> /*
>> * We have to call flush only on online vCPUs. And
>> @@ -732,10 +727,6 @@ static uint32_t __init kvm_detect(void)
>>
>> static void __init kvm_apic_init(void)
>> {
>> -#if defined(CONFIG_SMP)
>> - if (pv_ipi_supported())
>> - kvm_setup_pv_ipi();
>> -#endif
>> }
>
> Do we still need the now-empty function?
It's not necessary. I will remove it.
>
>>
>> static void __init kvm_init_platform(void)
>> @@ -765,10 +756,18 @@ static __init int activate_jump_labels(void)
>> }
>> arch_initcall(activate_jump_labels);
>>
>> +static void kvm_free_cpumask(void)
>> +{
>> + unsigned int cpu;
>> +
>> + for_each_possible_cpu(cpu)
>> + free_cpumask_var(per_cpu(__pv_cpu_mask, cpu));
>> +}
>> +
>> static __init int kvm_alloc_cpumask(void)
>> {
>> int cpu;
>> - bool alloc = false;
>> + bool alloc = false, alloced = true;
>>
>> if (!kvm_para_available() || nopv)
>> return 0;
>> @@ -783,10 +782,30 @@ static __init int kvm_alloc_cpumask(void)
>>
>> if (alloc)
>> for_each_possible_cpu(cpu) {
>> - zalloc_cpumask_var_node(per_cpu_ptr(&__pv_cpu_mask, cpu),
>> - GFP_KERNEL, cpu_to_node(cpu));
>> + if (!zalloc_cpumask_var_node(
>> + per_cpu_ptr(&__pv_cpu_mask, cpu),
>> + GFP_KERNEL, cpu_to_node(cpu))) {
>> + alloced = false;
>> + break;
>> + }
>> }
>>
>> +#if defined(CONFIG_SMP)
>> + /* Set the IPI entry points */
>> + if (pv_ipi_supported()) {
>
> What if we define pv_ipi_supported() in !CONFIG_SMP case as 'false'?
>
> The code we have above:
>
> if (pv_tlb_flush_supported())
> alloc = true;
>
> #if defined(CONFIG_SMP)
> if (pv_ipi_supported())
> alloc = true;
> #endif
>
> if (alloc)
> ...
>
> will transform into 'if (pv_tlb_flush_supported() ||
> pv_ipi_supported())' and we'll get rid of 'alloc' variable.
>
> Also, we can probably get rid of this new 'alloced' variable and switch
> to checking if the cpumask for the last CPU in cpu_possible_mask is not
> NULL.
Got it, that's a good point. I will do it. Thanks for your patience and
kindness.
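Something like this rough, untested sketch, assuming pv_ipi_supported()
gets a !CONFIG_SMP stub that returns false, and that callers test the
last possible CPU's mask instead of an 'alloced' flag:

```
static __init int kvm_alloc_cpumask(void)
{
	int cpu;

	if (!kvm_para_available() || nopv)
		return 0;

	/* pv_ipi_supported() is assumed to be a 'false' stub for !CONFIG_SMP */
	if (pv_tlb_flush_supported() || pv_ipi_supported())
		for_each_possible_cpu(cpu) {
			if (!zalloc_cpumask_var_node(per_cpu_ptr(&__pv_cpu_mask, cpu),
						     GFP_KERNEL, cpu_to_node(cpu))) {
				kvm_free_cpumask();
				return -ENOMEM;
			}
		}

	return 0;
}

/* elsewhere: instead of a separate 'alloced' variable, success can be
 * inferred from the last possible CPU's mask being non-NULL:
 *
 *	if (per_cpu(__pv_cpu_mask, cpumask_last(cpu_possible_mask)))
 *		... enable the PV paths ...
 */
```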
>
>> + apic->send_IPI_mask = kvm_send_ipi_mask;
>> + if (alloced)
>> + apic->send_IPI_mask_allbutself =
>> + kvm_send_ipi_mask_allbutself;
>> + pr_info("setup PV IPIs\n");
>
> I'd rather not set 'apic->send_IPI_mask = kvm_send_ipi_mask' in case we
> failed to alloc cpumask too. It is weird that in case of an allocation
> failure *some* IPIs will use the PV path and some won't. It's going to
> be a nightmare to debug.
Agreed. And in that case 'pv_ops.mmu.tlb_remove_table = tlb_remove_table'
should not be set either. What do you think? Thanks.
Haiwei Li