Date:   Mon, 9 Sep 2019 20:16:36 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Waiman Long <longman@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>, loobinliu@...cent.com,
        "# v3 . 10+" <stable@...r.kernel.org>
Subject: Re: [PATCH] Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"

On Mon, 9 Sep 2019 at 19:06, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 09/09/19 12:56, Waiman Long wrote:
> > On 9/9/19 2:40 AM, Wanpeng Li wrote:
> >> From: Wanpeng Li <wanpengli@...cent.com>
> >>
> >> This patch reverts commit 75437bb304b20 ("locking/pvqspinlock: Don't
> >> wait if vCPU is preempted"); we observed a severe performance
> >> regression caused by that commit.
> >>
> >> On a Xeon Skylake box (2 sockets, 40 cores, 80 threads) running three
> >> VMs, each with 80 vCPUs, the score of ebizzy -M drops from 13000-14000
> >> records/s to 1700-1800 records/s with this commit.
> >>
> >>           Host                            Guest                 score
> >>
> >> vanilla + w/o kvm optimizations    vanilla               1700-1800 records/s
> >> vanilla + w/o kvm optimizations    vanilla + revert      13000-14000 records/s
> >> vanilla + w/ kvm optimizations     vanilla               4500-5000 records/s
> >> vanilla + w/ kvm optimizations     vanilla + revert      14000-15500 records/s
> >>
> >> Exiting the spin loop through this aggressive wait-early mechanism can
> >> cause premature yields and incur extra scheduling latency in
> >> over-subscribed scenarios.
> >>
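> >> For reference, pv_wait_early() is polled from the spin loop in
> >> pv_wait_node(); once it returns true, the waiter stops spinning and
> >> halts itself. A condensed sketch of that loop (trimmed from
> >> kernel/locking/qspinlock_paravirt.h; not the complete function):
> >>
> >>     for (;;) {
> >>         for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
> >>             if (READ_ONCE(node->locked))
> >>                 return;
> >>             if (pv_wait_early(pp, loop)) {
> >>                 wait_early = true;
> >>                 break;              /* give up spinning early */
> >>             }
> >>             cpu_relax();
> >>         }
> >>
> >>         /* ... publish vcpu_halted, re-check node->locked ... */
> >>         pv_wait(&pn->state, vcpu_halted);       /* halt this vCPU */
> >>         /* ... */
> >>     }
> >>
> >> With the vcpu_is_preempted() check in place, pv_wait_early() fires far
> >> more often, so waiters reach pv_wait() (i.e. yield) prematurely.
> >>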
> >> The kvm optimizations are:
> >> [1] commit d73eb57b80b (KVM: Boost vCPUs that are delivering interrupts)
> >> [2] commit 266e85a5ec9 (KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption)
> >>
> >> Tested-by: loobinliu@...cent.com
> >> Cc: Peter Zijlstra <peterz@...radead.org>
> >> Cc: Thomas Gleixner <tglx@...utronix.de>
> >> Cc: Ingo Molnar <mingo@...nel.org>
> >> Cc: Waiman Long <longman@...hat.com>
> >> Cc: Paolo Bonzini <pbonzini@...hat.com>
> >> Cc: Radim Krčmář <rkrcmar@...hat.com>
> >> Cc: loobinliu@...cent.com
> >> Cc: stable@...r.kernel.org
> >> Fixes: 75437bb304b20 (locking/pvqspinlock: Don't wait if vCPU is preempted)
> >> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> >> ---
> >>  kernel/locking/qspinlock_paravirt.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
> >> index 89bab07..e84d21a 100644
> >> --- a/kernel/locking/qspinlock_paravirt.h
> >> +++ b/kernel/locking/qspinlock_paravirt.h
> >> @@ -269,7 +269,7 @@ pv_wait_early(struct pv_node *prev, int loop)
> >>      if ((loop & PV_PREV_CHECK_MASK) != 0)
> >>              return false;
> >>
> >> -    return READ_ONCE(prev->state) != vcpu_running || vcpu_is_preempted(prev->cpu);
> >> +    return READ_ONCE(prev->state) != vcpu_running;
> >>  }
> >>
> >>  /*
> >
> > There are several possibilities for this performance regression:
> >
> > 1) Multiple vcpus calling vcpu_is_preempted() repeatedly may cause
> > cacheline contention issues, depending on how that callback is
> > implemented.
>
> Unlikely, it is a single percpu read.
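>
> On x86 KVM guests the callback boils down to a sketch like this (the C
> slow path from arch/x86/kernel/kvm.c; the kernel may substitute a
> patched assembly variant at runtime):
>
>     static bool __kvm_vcpu_is_preempted(long cpu)
>     {
>             struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
>
>             return !!(src->preempted & KVM_VCPU_PREEMPTED);
>     }
>
> That is one load from the target vCPU's steal-time area, which the host
> updates on sched-out/sched-in.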
>
> > 2) KVM may set the preempt flag for a short period whenever a vmexit
> > happens, even if a vmenter is executed shortly after. In this case, we
> > may want to use a more durable vcpu suspend flag that indicates the vcpu
> > won't get a real CPU back for a longer period of time.
>
> KVM does set the flag for exits to userspace, but those shouldn't really
> happen on a properly-configured system.
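>
> Concretely, vcpu->preempted is only set from the preempt notifier, yet
> kvm_arch_vcpu_put() (and with it kvm_steal_time_set_preempted()) also
> runs when the vCPU thread returns to userspace. A sketch from
> virt/kvm/kvm_main.c of this era:
>
>     static void kvm_sched_out(struct preempt_notifier *pn,
>                               struct task_struct *next)
>     {
>             struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
>
>             if (current->state == TASK_RUNNING)
>                     vcpu->preempted = true;
>             kvm_arch_vcpu_put(vcpu);
>     }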
>
> However, it's easy to test this theory:
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2e302e977dac..feb6c75a7a88 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3368,26 +3368,28 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  {
>         int idx;
>
> -       if (vcpu->preempted)
> +       if (vcpu->preempted) {
>                 vcpu->arch.preempted_in_kernel = !kvm_x86_ops->get_cpl(vcpu);
>
> -       /*
> -        * Disable page faults because we're in atomic context here.
> -        * kvm_write_guest_offset_cached() would call might_fault()
> -        * that relies on pagefault_disable() to tell if there's a
> -        * bug. NOTE: the write to guest memory may not go through if
> -        * during postcopy live migration or if there's heavy guest
> -        * paging.
> -        */
> -       pagefault_disable();
> -       /*
> -        * kvm_memslots() will be called by
> -        * kvm_write_guest_offset_cached() so take the srcu lock.
> -        */
> -       idx = srcu_read_lock(&vcpu->kvm->srcu);
> -       kvm_steal_time_set_preempted(vcpu);
> -       srcu_read_unlock(&vcpu->kvm->srcu, idx);
> -       pagefault_enable();
> +               /*
> +                * Disable page faults because we're in atomic context here.
> +                * kvm_write_guest_offset_cached() would call might_fault()
> +                * that relies on pagefault_disable() to tell if there's a
> +                * bug. NOTE: the write to guest memory may not go through if
> +                * during postcopy live migration or if there's heavy guest
> +                * paging.
> +                */
> +               pagefault_disable();
> +               /*
> +                * kvm_memslots() will be called by
> +                * kvm_write_guest_offset_cached() so take the srcu lock.
> +                */
> +               idx = srcu_read_lock(&vcpu->kvm->srcu);
> +               kvm_steal_time_set_preempted(vcpu);
> +               srcu_read_unlock(&vcpu->kvm->srcu, idx);
> +               pagefault_enable();
> +       }
> +
>         kvm_x86_ops->vcpu_put(vcpu);
>         vcpu->arch.last_host_tsc = rdtsc();
>         /*
>
> Wanpeng, can you try?

Yes, there is no difference in the score.

Wanpeng
