Date:   Mon, 23 Jul 2018 09:52:52 -0400
From:   Waiman Long <longman@...hat.com>
To:     Davidlohr Bueso <dave@...olabs.net>,
        Wanpeng Li <kernellwp@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krcmar <rkrcmar@...hat.com>,
        Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Juergen Gross <jgross@...e.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        the arch/x86 maintainers <x86@...nel.org>,
        xen-devel <xen-devel@...ts.xenproject.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU

On 07/23/2018 12:42 AM, Davidlohr Bueso wrote:
> On Mon, 23 Jul 2018, Wanpeng Li wrote:
>
>> On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@...hat.com> wrote:
>>>
>>> On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
>>> > On Thu, 19 Jul 2018, Waiman Long wrote:
>>> >
>>> >> On a VM with only 1 vCPU, the locking fast paths will always be
>>> >> successful. In this case, there is no need to use the PV qspinlock
>>> >> code which has higher overhead on the unlock side than the native
>>> >> qspinlock code.
>>> >>
>>> >> The xen_pvspin veriable is also turned off in this 1 vCPU case to
>
> s/veriable/variable/
>
>>> >> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>>> >> which is run after xen_init_spinlocks().
>>> >
>>> > Wouldn't kvm also want this?
>>> >
>>> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> > index a37bda38d205..95aceb692010 100644
>>> > --- a/arch/x86/kernel/kvm.c
>>> > +++ b/arch/x86/kernel/kvm.c
>>> > @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
>>> > static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
>>> > {
>>> >     native_smp_prepare_cpus(max_cpus);
>>> > -    if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>>> > +    if (num_possible_cpus() == 1 ||
>>> > +        kvm_para_has_hint(KVM_HINTS_REALTIME))
>>> >         static_branch_disable(&virt_spin_lock_key);
>>> > }
>>>
>>> That doesn't really matter as the slowpath will never get executed in
>>> the 1 vCPU case.
>
> How does this differ from xen, then? I mean, the same principle applies.

I am not saying this patch is wrong. I am just saying that this is not
necessary.

In the xen case, they have a single variable that controls whether
pvqspinlock should be used and turns off all the knobs accordingly. There
is no such equivalent in kvm. We had talked about that in the past, but
didn't come to a conclusion. In the 1 vCPU case, the most important
thing is to not use the pvqspinlock unlock path, which adds unneeded
runtime overhead.
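To illustrate the unlock-side cost being discussed, here is a rough
userspace model (not kernel code; all names here are illustrative): the
native unlock is a single release store, while the PV unlock must also
check for a halted waiter and, when one exists, kick its vCPU with a
hypercall.

```c
#include <stdatomic.h>

/* Hypothetical sketch of the unlock-side difference.  None of these
 * symbols are real kernel identifiers. */
static atomic_uint lock_word;
static int hypercall_kicks;
static int waiter_halted;   /* tracked per-waiter in the real code */

static void native_unlock_model(void)
{
    /* native qspinlock unlock: one release store, nothing else */
    atomic_store_explicit(&lock_word, 0, memory_order_release);
}

static void pv_unlock_model(void)
{
    atomic_store_explicit(&lock_word, 0, memory_order_release);
    /* extra work the native path never does */
    if (waiter_halted) {
        hypercall_kicks++;  /* models a vCPU-kick hypercall */
        waiter_halted = 0;
    }
}
```

Even with no halted waiter, the PV path pays for the extra check (and,
in the kernel, an out-of-line call through a patched call site) on every
unlock, which is the overhead a 1 vCPU guest never needs.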

The others just have a slight boot time overhead. For me, they are
optional. So I don't bother to add code to explicitly turn them off, as
the result will be the same with or without them.

>
>>
>> So this is not needed in kvm tree?
>> https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02
>>
>
> Hmm I would think that my patch would be more appropriate as it
> actually does what the comment says.

The static key controls the behavior of the locking slowpath which will
not be executed at all. So it is essentially a no-op.
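To make the no-op point concrete, here is a rough userspace model of the
fast path (again, illustrative names only, assuming a simplified
cmpxchg-based acquire): with one vCPU there is never a concurrent
holder, so the cmpxchg always succeeds and nothing inside the slowpath,
including any static-key check, ever runs.

```c
#include <stdatomic.h>

/* Hypothetical, simplified model of the qspinlock fast path: the lock
 * word is 0 when free, and an uncontended acquire is a single cmpxchg. */
static atomic_uint lock_val;
static int slowpath_calls;

static void queued_spin_lock_slowpath_model(void)
{
    /* in the kernel this is where the static key would be consulted */
    slowpath_calls++;
}

static void queued_spin_lock_model(void)
{
    unsigned int expected = 0;
    if (atomic_compare_exchange_strong(&lock_val, &expected, 1))
        return;             /* fast path: uncontended acquire */
    queued_spin_lock_slowpath_model();
}

static void queued_spin_unlock_model(void)
{
    atomic_store_explicit(&lock_val, 0, memory_order_release);
}
```

With a single vCPU the lock is always free at acquire time, so
slowpath_calls stays at zero no matter how many lock/unlock cycles run;
disabling a key that only the slowpath reads therefore changes nothing.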

-Longman
