Message-ID: <87r1dz4fxs.fsf@vitty.brq.redhat.com>
Date: Wed, 08 Sep 2021 10:41:51 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Eduardo Habkost <ehabkost@...hat.com>,
Juergen Gross <jgross@...e.com>
Cc: kvm@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org,
maz@...nel.org, Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2 3/6] x86/kvm: introduce per cpu vcpu masks
Eduardo Habkost <ehabkost@...hat.com> writes:
> On Fri, Sep 03, 2021 at 03:08:04PM +0200, Juergen Gross wrote:
>> In order to support high vcpu numbers per guest, don't use on-stack
>> vcpu bitmasks. As none of the currently used bitmasks is in a
>> function subject to recursion, it is fairly easy to replace them with
>> percpu bitmasks.
>>
>> Disable preemption while such a bitmask is being used in order to
>> avoid double usage in case we'd switch cpus.
>>
>> Signed-off-by: Juergen Gross <jgross@...e.com>
>> ---
>> V2:
>> - use local_lock() instead of preempt_disable() (Paolo Bonzini)
>> ---
>> arch/x86/include/asm/kvm_host.h | 10 ++++++++++
>> arch/x86/kvm/hyperv.c | 25 ++++++++++++++++++-------
>> arch/x86/kvm/irq_comm.c | 9 +++++++--
>> arch/x86/kvm/x86.c | 22 +++++++++++++++++++++-
>> 4 files changed, 56 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 3513edee8e22..a809a9e4fa5c 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -15,6 +15,7 @@
>> #include <linux/cpumask.h>
>> #include <linux/irq_work.h>
>> #include <linux/irq.h>
>> +#include <linux/local_lock.h>
>>
>> #include <linux/kvm.h>
>> #include <linux/kvm_para.h>
>> @@ -1591,6 +1592,15 @@ extern bool kvm_has_bus_lock_exit;
>> /* maximum vcpu-id */
>> unsigned int kvm_max_vcpu_id(void);
>>
>> +/* per cpu vcpu bitmasks, protected by kvm_pcpu_mask_lock */
>> +DECLARE_PER_CPU(local_lock_t, kvm_pcpu_mask_lock);
>> +extern unsigned long __percpu *kvm_pcpu_vcpu_mask;
>> +#define KVM_VCPU_MASK_SZ \
>> + (sizeof(*kvm_pcpu_vcpu_mask) * BITS_TO_LONGS(KVM_MAX_VCPUS))
>> +extern u64 __percpu *kvm_hv_vp_bitmap;
>> +#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, 64)
>> +#define KVM_HV_VPMAP_SZ (sizeof(u64) * KVM_HV_MAX_SPARSE_VCPU_SET_BITS)
>
> I have just realized that the Hyper-V sparse bitmap format can
> support only up to 4096 CPUs, and the current implementation of
> sparse_set_to_vcpu_mask() won't even work correctly if
> KVM_MAX_VCPUS is larger than 4096.
Nice catch! Indeed, we can only encode 64 'banks' with 64 vCPUs each, so
4096 vCPUs total. We need to enforce the limit somehow. As a big hammer,
I can suggest failing kvm_hv_vcpu_init() and writes to
HV_X64_MSR_VP_INDEX for vCPUs above 4095 for now (I seriously doubt
there's a real need to run such big Windows guests anywhere, but who
knows).
>
> This means vp_bitmap can't and will never be larger than 512
> bytes. Isn't a per-CPU variable for vp_bitmap overkill in this
> case?
Yes, it's OK to allocate 512 bytes on stack.
--
Vitaly