Message-ID: <CANRm+CxQAsKaGQwfvS6P9f7U1PwTrBF2UJom4Wnqj0nybMAzZw@mail.gmail.com>
Date: Wed, 18 Jul 2018 11:00:39 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krcmar <rkrcmar@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH v3 0/6] KVM: X86: Implement PV IPIs support
Gentle ping, hope this series can catch up with the next merge window. :)
On Tue, 3 Jul 2018 at 14:21, Wanpeng Li <kernellwp@...il.com> wrote:
>
> Use a hypercall to send IPIs with a single vmexit, instead of one
> vmexit per destination in xAPIC/x2APIC physical mode and one vmexit
> per cluster in x2APIC cluster mode. An Intel guest can enter x2APIC
> cluster mode when interrupt remapping is enabled in QEMU; the latest
> AMD EPYC, however, still supports only xAPIC mode, which gains the
> most from PV IPIs. This patchset supports PV IPIs for VMs with up to
> 128 vCPUs, which is large enough for current cloud environments;
> supporting more vCPUs would require more complex logic and can be
> added later if needed.
>
> Hardware: Xeon Skylake 2.5GHz, 2 sockets, 40 cores, 80 threads, the VM
> is 80 vCPUs, IPI microbenchmark(https://lkml.org/lkml/2017/12/19/141):
>
> x2apic cluster mode, vanilla
>
> Dry-run: 0, 2392199 ns
> Self-IPI: 6907514, 15027589 ns
> Normal IPI: 223910476, 251301666 ns
> Broadcast IPI: 0, 9282161150 ns
> Broadcast lock: 0, 8812934104 ns
>
> x2apic cluster mode, pv-ipi
>
> Dry-run: 0, 2449341 ns
> Self-IPI: 6720360, 15028732 ns
> Normal IPI: 228643307, 255708477 ns
> Broadcast IPI: 0, 7572293590 ns => 22% performance boost
> Broadcast lock: 0, 8316124651 ns
>
> x2apic physical mode, vanilla
>
> Dry-run: 0, 3135933 ns
> Self-IPI: 8572670, 17901757 ns
> Normal IPI: 226444334, 255421709 ns
> Broadcast IPI: 0, 19845070887 ns
> Broadcast lock: 0, 19827383656 ns
>
> x2apic physical mode, pv-ipi
>
> Dry-run: 0, 2446381 ns
> Self-IPI: 6788217, 15021056 ns
> Normal IPI: 219454441, 249583458 ns
> Broadcast IPI: 0, 7806540019 ns => 154% performance boost
> Broadcast lock: 0, 9143618799 ns
>
> v2 -> v3:
> * rename ipi_mask_done to irq_restore_exit, __send_ipi_mask return int
> instead of bool
> * fix build errors reported by 0day
> * split patches, no functional change
>
> v1 -> v2:
> * fall back to original apic hooks for sparse APIC IDs >= 128, or any other errors
> * have two bitmask arguments so that one hypercall handles 128 vCPUs
> * fix KVM_FEATURE_PV_SEND_IPI doc
> * document hypercall
> * fix NMI selftest fails
> * fix build errors reported by 0day
>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Radim Krčmář <rkrcmar@...hat.com>
> Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
>
> Wanpeng Li (6):
> KVM: X86: Add kvm hypervisor init time platform setup callback
> KVM: X86: Implement PV IPIs in linux guest
> KVM: X86: Fallback to original apic hooks when bad happens
> KVM: X86: Implement PV IPIs send hypercall
> KVM: X86: Add NMI support to PV IPIs
> KVM: X86: Expose PV_SEND_IPI CPUID feature bit to guest
>
> Documentation/virtual/kvm/cpuid.txt | 4 ++
> Documentation/virtual/kvm/hypercalls.txt | 6 ++
> arch/x86/include/uapi/asm/kvm_para.h | 1 +
> arch/x86/kernel/kvm.c | 101 +++++++++++++++++++++++++++++++
> arch/x86/kvm/cpuid.c | 3 +-
> arch/x86/kvm/x86.c | 42 +++++++++++++
> include/uapi/linux/kvm_para.h | 1 +
> 7 files changed, 157 insertions(+), 1 deletion(-)
>
> --
> 2.7.4
>