Message-ID: <YH2wnl05UBqVhcHr@google.com>
Date:   Mon, 19 Apr 2021 16:32:30 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Wanpeng Li <kernellwp@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH] KVM: Boost vCPU candidate in user mode which is
 delivering interrupt

On Mon, Apr 19, 2021, Wanpeng Li wrote:
> On Sat, 17 Apr 2021 at 21:09, Paolo Bonzini <pbonzini@...hat.com> wrote:
> >
> > On 16/04/21 05:08, Wanpeng Li wrote:
> > > From: Wanpeng Li <wanpengli@...cent.com>
> > >
> > > Both the lock holder vCPU and a halted IPI receiver are candidates
> > > for a boost. However, the PLE handler was originally designed to
> > > deal with the lock holder preemption problem: Intel PLE fires when
> > > the spinlock waiter is in kernel mode. That assumption doesn't hold
> > > for IPI receivers, which can be in either kernel or user mode, so a
> > > vCPU candidate in user mode will not be boosted even when it should
> > > respond to an IPI. Benchmarks like pbzip2 and swaptions do their TLB
> > > shootdowns in kernel mode but spend most of their time in user mode.
> > > This can lead to long runs of back-to-back PLE events: the IPI
> > > sender keeps triggering PLE until the receiver is scheduled, yet the
> > > receiver is never a candidate for a boost.
> > >
> > > This patch boosts the vCPU candidate in user mode to which an
> > > interrupt is being delivered. With it, pbzip2 runs about 10% faster
> > > in a 96-vCPU VM in an over-subscribed scenario (the host is a
> > > 2-socket, 48-core, 96-HT Intel CLX box). There is no performance
> > > regression for other benchmarks such as Unixbench spawn (which
> > > mostly contends a read/write lock in kernel mode) or ebizzy (which
> > > mostly contends a read/write semaphore and does TLB shootdowns in
> > > kernel mode).
> > >
> > > +bool kvm_arch_interrupt_delivery(struct kvm_vcpu *vcpu)
> > > +{
> > > +	return vcpu->arch.apicv_active &&
> > > +	       static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu);
> > > +}
> >
> > Can you reuse vcpu_dy_runnable instead of this new function?
> 
> I have some concerns. On x86, vcpu_dy_runnable() will add extra vCPU
> candidates via KVM_REQ_EVENT

Is bringing in KVM_REQ_EVENT a bad thing though?  I don't see how using apicv is
special in this case.  apicv is more precise and so there will be fewer false
positives, but it's still just a guess on KVM's part since the interrupt could
be for something completely unrelated.
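
For reference, this is roughly what x86's kvm_arch_dy_runnable() checks
today (paraphrased from memory of arch/x86/kvm/x86.c, so treat it as a
sketch rather than the authoritative source):

bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
{
	/* The PV "kick" path has explicitly woken this vCPU. */
	if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
		return true;

	/* A pending NMI/SMI/event injection makes the vCPU interesting. */
	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
	    kvm_test_request(KVM_REQ_SMI, vcpu) ||
	    kvm_test_request(KVM_REQ_EVENT, vcpu))
		return true;

	return false;
}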

If false positives are a big concern, what about adding another pass to the loop
and only yielding to usermode vCPUs with interrupts in the second full pass?
I.e. give vCPUs that are already in kernel mode priority, and only yield to
handle an interrupt if there are no vCPUs in kernel mode.
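
Something like this (illustrative pseudocode only, not the actual
kvm_vcpu_on_spin() loop; kvm_arch_interrupt_delivery() is the helper
proposed in this patch):

	for (round = 0; round < 2 && !yielded; round++) {
		kvm_for_each_vcpu(i, vcpu, kvm) {
			if (vcpu == me || !READ_ONCE(vcpu->ready))
				continue;
			/* Pass 1: only vCPUs preempted in kernel mode. */
			if (round == 0 && !kvm_arch_vcpu_in_kernel(vcpu))
				continue;
			/* Pass 2: usermode vCPUs with a pending interrupt. */
			if (round == 1 && !kvm_arch_interrupt_delivery(vcpu))
				continue;
			if (kvm_vcpu_yield_to(vcpu) > 0) {
				yielded = true;
				break;
			}
		}
	}

That keeps the existing preference for lock holders in kernel mode while
still rescuing IPI receivers stuck in user mode.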

kvm_arch_dy_runnable() pulls in pv_unhalted, which seems like a good thing.

> and async PF (which has already opportunistically made the guest do other stuff).

Any reason not to use kvm_arch_dy_runnable() directly?
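
The generic wrapper is just the arch hook plus the async-PF completion
check (again paraphrased; see vcpu_dy_runnable() in virt/kvm/kvm_main.c
for the real thing):

static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
{
	if (kvm_arch_dy_runnable(vcpu))
		return true;

#ifdef CONFIG_KVM_ASYNC_PF
	/* A completed async page fault also makes the vCPU runnable. */
	if (!list_empty_careful(&vcpu->async_pf.done))
		return true;
#endif

	return false;
}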

> For the other arches, kvm_arch_dy_runnable() is equal to
> kvm_arch_vcpu_runnable(), except for powerpc, which has too many events
> and is not conservative. In general, vcpu_dy_runnable() will loosen the
> conditions and add more vCPU candidates.
> 
>     Wanpeng
