Date:   Tue, 1 Aug 2017 10:32:32 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     "Longpeng (Mike)" <longpeng2@...wei.com>
Cc:     David Hildenbrand <david@...hat.com>, rkrcmar@...hat.com,
        agraf@...e.com, borntraeger@...ibm.com, cohuck@...hat.com,
        christoffer.dall@...aro.org, marc.zyngier@....com,
        james.hogan@...tec.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, weidong.huang@...wei.com,
        arei.gonglei@...wei.com, wangxinxin.wang@...wei.com,
        longpeng.mike@...il.com
Subject: Re: [RFC] KVM: optimize the kvm_vcpu_on_spin

On 01/08/2017 05:26, Longpeng (Mike) wrote:
> 
> 
> On 2017/7/31 21:20, Paolo Bonzini wrote:
> 
>> On 31/07/2017 14:27, David Hildenbrand wrote:
>>>> I'm not sure whether the operation of getting the vcpu's privilege level is
>>>> expensive on all architectures, so I record it in kvm_sched_out() to
>>>> minimize the extra cycles spent in kvm_vcpu_on_spin().
>>>>
>>> As you only care about x86 right now either way, you can directly optimize
>>> here for the good (here: x86) case (keeping the changes, and therefore the
>>> possible bugs, minimal).
>>
>> I agree with Cornelia that this is inconsistent, so you shouldn't update
>> me->in_kernmode in kvm_vcpu_on_spin.  However, get_cpl requires
>> vcpu_load on Intel x86, so Mike's patch is necessary (vmx_get_cpl ->
>> vmx_read_guest_seg_ar -> vmcs_read32).
>>
> 
> Hi Paolo,
> 
> It seems that other architectures (e.g. arm/s390) don't need to cache the result,
> but x86 does, so I need to move 'in_kernmode' into kvm_vcpu_arch and only add
> this field to x86, right?

That's another way to do it, and I like it.
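Roughly what I would expect, as an untested sketch (the exact field name and
the hook are of course up to you; this only illustrates the idea):

  /* arch/x86/include/asm/kvm_host.h -- cache the mode in the x86 part */
  struct kvm_vcpu_arch {
  	/* ... existing fields ... */

  	/*
  	 * Snapshot of "guest was in kernel mode (CPL == 0)", taken when the
  	 * vCPU is scheduled out, so kvm_vcpu_on_spin() never has to call
  	 * ->get_cpl(), which needs the VMCS to be loaded on Intel.
  	 */
  	bool in_kernmode;
  };

  /* arch/x86/kvm/x86.c -- refresh the snapshot on sched-out */
  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
  {
  	/* called from kvm_sched_out(), the VMCS is still loaded here */
  	vcpu->arch.in_kernmode = kvm_x86_ops->get_cpl(vcpu) == 0;

  	/* ... existing body ... */
  }

arm/s390 then never grow the field; they can answer the question directly
from a cheap per-arch helper when kvm_vcpu_on_spin() asks for it.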

>>  This will cache the result until the next sched_in, so that
> 
> 'until the next sched_in' --> Do we need to clear the result in sched_in?

No, thanks.
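The cached value simply stays whatever the last sched-out wrote, and the spin
loop only reads it. On the consumer side I'd expect roughly this (again only a
sketch; kvm_arch_vcpu_in_kernel() and me_in_kernmode are illustrative names,
not something the patch has to use):

  /* virt/kvm/kvm_main.c -- in the candidate loop of kvm_vcpu_on_spin() */
  	/*
  	 * If the spinning vCPU was in kernel mode (so likely holding a
  	 * spinlock), don't yield to candidates that were preempted in
  	 * user mode -- they cannot be the lock holder.
  	 */
  	if (me_in_kernmode && !kvm_arch_vcpu_in_kernel(vcpu))
  		continue;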

Paolo
