Message-ID: <CANRm+CxVXsQCmEpxNJSifmQJk5cqoSifFq+huHJE1s7a-=0iXw@mail.gmail.com>
Date:   Tue, 10 Sep 2019 13:56:42 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Waiman Long <longman@...hat.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>, loobinliu@...cent.com,
        "# v3 . 10+" <stable@...r.kernel.org>
Subject: Re: [PATCH] Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"

On Mon, 9 Sep 2019 at 18:56, Waiman Long <longman@...hat.com> wrote:
>
> On 9/9/19 2:40 AM, Wanpeng Li wrote:
> > From: Wanpeng Li <wanpengli@...cent.com>
> >
> > This patch reverts commit 75437bb304b20 (locking/pvqspinlock: Don't wait if
> > vCPU is preempted); we found a severe performance regression caused by this
> > commit.
> >
> > Xeon Skylake box, 2 sockets, 40 cores, 80 threads, three VMs, each with 80
> > vCPUs. The score of ebizzy -M drops from 13000-14000 records/s to 1700-1800
> > records/s with this commit.
> >
> >           Host                          Guest                 score
> >
> > vanilla + w/o KVM optimizations    vanilla               1700-1800 records/s
> > vanilla + w/o KVM optimizations    vanilla + revert      13000-14000 records/s
> > vanilla + w/ KVM optimizations     vanilla               4500-5000 records/s
> > vanilla + w/ KVM optimizations     vanilla + revert      14000-15500 records/s
> >
> > An overly aggressive exit from the wait-early mechanism can make a vCPU
> > yield prematurely and incur extra scheduling latency in over-subscribed
> > scenarios, as the sketch below shows.
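> >
> > A true return from pv_wait_early() makes a queued vCPU stop spinning and
> > halt via pv_wait(); trimmed from pv_wait_node() in
> > kernel/locking/qspinlock_paravirt.h, the wait loop is roughly:
> >
> >      for (;;) {
> >              for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
> >                      if (READ_ONCE(node->locked))
> >                              return;
> >                      /* a preempted predecessor triggers an early bail-out */
> >                      if (pv_wait_early(pp, loop)) {
> >                              wait_early = true;
> >                              break;
> >                      }
> >                      cpu_relax();
> >              }
> >              ...
> >              /* halt this vCPU (a HLT vmexit on KVM) until it is kicked */
> >              pv_wait(&pn->state, vcpu_halted);
> >              ...
> >      }
> >
> > With the extra vcpu_is_preempted(prev->cpu) check, a transiently preempted
> > predecessor makes the waiter halt and give up its physical CPU even though
> > it could often keep spinning and take the lock shortly afterwards.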
> >
> > KVM optimizations:
> > [1] commit d73eb57b80b (KVM: Boost vCPUs that are delivering interrupts)
> > [2] commit 266e85a5ec9 (KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption)
> >
> > Tested-by: loobinliu@...cent.com
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Ingo Molnar <mingo@...nel.org>
> > Cc: Waiman Long <longman@...hat.com>
> > Cc: Paolo Bonzini <pbonzini@...hat.com>
> > Cc: Radim Krčmář <rkrcmar@...hat.com>
> > Cc: loobinliu@...cent.com
> > Cc: stable@...r.kernel.org
> > Fixes: 75437bb304b20 (locking/pvqspinlock: Don't wait if vCPU is preempted)
> > Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> > ---
> >  kernel/locking/qspinlock_paravirt.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
> > index 89bab07..e84d21a 100644
> > --- a/kernel/locking/qspinlock_paravirt.h
> > +++ b/kernel/locking/qspinlock_paravirt.h
> > @@ -269,7 +269,7 @@ pv_wait_early(struct pv_node *prev, int loop)
> >       if ((loop & PV_PREV_CHECK_MASK) != 0)
> >               return false;
> >
> > -     return READ_ONCE(prev->state) != vcpu_running || vcpu_is_preempted(prev->cpu);
> > +     return READ_ONCE(prev->state) != vcpu_running;
> >  }
> >
> >  /*
>
> There are several possibilities for this performance regression:
>
> 1) Multiple vcpus calling vcpu_is_preempted() repeatedly may cause some
> cacheline contention, depending on how that callback is implemented (see
> the sketch below).
>
> 2) KVM may set the preempt flag for a short period whenever a vmexit
> happens, even if a vmenter is executed shortly after. In this case, we
> may want to use a more durable vcpu suspend flag that indicates the vcpu
> won't get a real CPU back for a longer period of time.
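>
> For reference, on x86 the KVM version of that callback is just a test of a
> flag in the target CPU's steal time area, roughly (trimmed from
> arch/x86/kernel/kvm.c):
>
>      __visible bool __kvm_vcpu_is_preempted(long cpu)
>      {
>              /* read the preempted flag published by the host */
>              struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
>
>              return !!(src->preempted & KVM_VCPU_PREEMPTED);
>      }
>
> The host sets that flag when the vCPU is scheduled out and clears it when
> the vCPU runs again, so many waiters polling the same predecessor's
> steal_time cacheline would fit 1), and a flag that stays set across even a
> brief reschedule would fit 2).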
>
> Perhaps you can add a lock event counter to count the number of
> wait_early events caused by vcpu_is_preempted() being true, to see if it
> really causes a lot more wait_early events than without the
> vcpu_is_preempted() call.
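
Counting that with an ad-hoc lock event (a sketch only: a
LOCK_EVENT(pv_vcpu_is_preempted) entry added to
kernel/locking/lock_events_list.h, plus a lockevent_inc() in
pv_wait_early(), roughly):

     static inline bool pv_wait_early(struct pv_node *prev, int loop)
     {
             if ((loop & PV_PREV_CHECK_MASK) != 0)
                     return false;

             if (READ_ONCE(prev->state) != vcpu_running)
                     return true;

             /* count wait_earlys that fire only because of preemption */
             if (vcpu_is_preempted(prev->cpu)) {
                     lockevent_inc(pv_vcpu_is_preempted);
                     return true;
             }
             return false;
     }

gives these counts across two samples: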

pv_wait_again:1:179
pv_wait_early:1:189429
pv_wait_head:1:263
pv_wait_node:1:189429
pv_vcpu_is_preempted:1:45588
=========sleep 5============
pv_wait_again:1:181
pv_wait_early:1:202574
pv_wait_head:1:267
pv_wait_node:1:202590
pv_vcpu_is_preempted:1:46336

The sampling period is 5s. (46336 - 45588) / (202574 - 189429) = 748 / 13145,
so about 6% of the wait_early events are caused by vcpu_is_preempted() being
true.

                Wanpeng
