Message-ID: <CANRm+CzfjegzGmou_MtPGLYnWzGuC5RExYs4f=mVhq0sD6j5Sg@mail.gmail.com>
Date:   Fri, 8 Apr 2022 07:58:15 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH v2 0/5] KVM: X86: Scaling Guest OS Critical Sections with boosting

ping,
On Fri, 1 Apr 2022 at 16:10, Wanpeng Li <kernellwp@...il.com> wrote:
>
> There is a semantic gap when a guest OS is preempted while executing
> its own critical section: the host does not know this is happening,
> and application scalability degrades as a result. We try to bridge
> this gap by passing the guest preempt_count to the host and checking
> the guest irq-disable state. The hypervisor then knows whether a guest
> is running in a critical section, so its yield-on-spin heuristics can
> be smarter and boost the vCPU that is in the critical section to
> mitigate the preemption problem; such a vCPU is also more likely to
> be a potential lock holder.
>
> Testing was done on a 2-socket, 96-HT Xeon CLX server with 96-vCPU,
> 100 GB RAM VMs: one VM runs the benchmark while the other VMs (in the
> 2-VM and 3-VM cases) run CPU-bound workloads. There is no performance
> regression for other benchmarks such as UnixBench.
>
> 1VM:
>                      vanilla      optimized     improved
> hackbench -l 50000   28           21.45         30.5%
> ebizzy -M            12189        12354         1.4%
> dbench               712 MB/sec   722 MB/sec    1.4%
>
> 2VM:
>                      vanilla      optimized     improved
> hackbench -l 10000   29.4         26            13%
> ebizzy -M            3834         4033          5%
> dbench               42.3 MB/sec  44.1 MB/sec   4.3%
>
> 3VM:
>                      vanilla      optimized     improved
> hackbench -l 10000   47           35.46         33%
> ebizzy -M            3828         4031          5%
> dbench               30.5 MB/sec  31.16 MB/sec  2.3%
>
> v1 -> v2:
>  * add more comments on the irq-disable state
>  * rename irq_disabled to last_guest_irq_disabled
>  * rename kvm_vcpu_non_preemptable, invert its return value, and make it return a bool
>
> Wanpeng Li (5):
>   KVM: X86: Add MSR_KVM_PREEMPT_COUNT support
>   KVM: X86: Add last guest interrupt disable state support
>   KVM: X86: Boost vCPU which is in critical section
>   x86/kvm: Add MSR_KVM_PREEMPT_COUNT guest support
>   KVM: X86: Expose PREEMPT_COUNT CPUID feature bit to guest
>
>  Documentation/virt/kvm/cpuid.rst     |  3 ++
>  arch/x86/include/asm/kvm_host.h      |  8 ++++
>  arch/x86/include/uapi/asm/kvm_para.h |  2 +
>  arch/x86/kernel/kvm.c                | 10 +++++
>  arch/x86/kvm/cpuid.c                 |  3 +-
>  arch/x86/kvm/x86.c                   | 60 ++++++++++++++++++++++++++++
>  include/linux/kvm_host.h             |  1 +
>  virt/kvm/kvm_main.c                  |  7 ++++
>  8 files changed, 93 insertions(+), 1 deletion(-)
>
> --
> 2.25.1
>
