Message-ID: <597FF4CE.7050901@huawei.com>
Date:   Tue, 1 Aug 2017 11:26:06 +0800
From:   "Longpeng (Mike)" <longpeng2@...wei.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
CC:     David Hildenbrand <david@...hat.com>, <rkrcmar@...hat.com>,
        <agraf@...e.com>, <borntraeger@...ibm.com>, <cohuck@...hat.com>,
        <christoffer.dall@...aro.org>, <marc.zyngier@....com>,
        <james.hogan@...tec.com>, <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <weidong.huang@...wei.com>,
        <arei.gonglei@...wei.com>, <wangxinxin.wang@...wei.com>,
        <longpeng.mike@...il.com>
Subject: Re: [RFC] KVM: optimize the kvm_vcpu_on_spin



On 2017/7/31 21:20, Paolo Bonzini wrote:

> On 31/07/2017 14:27, David Hildenbrand wrote:
>>> I'm not sure whether getting the vcpu's priority level is expensive on
>>> all architectures, so I record it in kvm_sched_out() to minimize the
>>> extra cycles it costs in kvm_vcpu_on_spin().
>>>
>> As you only care about x86 right now either way, you can directly optimize
>> here for the good (here: x86) case (keeping changes, and therefore
>> possible bugs, minimal).
> 
> I agree with Cornelia that this is inconsistent, so you shouldn't update
> me->in_kernmode in kvm_vcpu_on_spin.  However, get_cpl requires
> vcpu_load on Intel x86, so Mike's patch is necessary (vmx_get_cpl ->
> vmx_read_guest_seg_ar -> vmcs_read32).
> 

Hi Paolo,

It seems that other architectures (e.g. arm/s390) don't need to cache the
result, but x86 does, so I should move 'in_kernmode' into kvm_vcpu_arch and
only add this field on x86, right?
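Something like the rough sketch below (not compile-tested; the helper name
kvm_arch_vcpu_preempted_in_kernel is just a placeholder for this discussion):

	/* arch/x86/include/asm/kvm_host.h */
	struct kvm_vcpu_arch {
		/* ... existing fields ... */
		bool in_kernmode;	/* CPL == 0 when last scheduled out */
	};

	/* arch/x86/kvm/x86.c */
	bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.in_kernmode;
	}

Other architectures could then implement the same helper by reading the
state directly, without a cached field, since they don't need vcpu_load
for that.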

> Alternatively, we can add a new callback kvm_x86_ops->sched_out to x86
> KVM, and call vmx_get_cpl from the Intel implementation (vmx_sched_out).


In this approach, vmx_sched_out would only call vmx_get_cpl. Isn't that a bit
redundant, since we could just call kvm_x86_ops->get_cpl at the right place
instead?
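To make sure I understand the difference, a rough (untested) sketch of the
two options as I see them:

	/* Option A: a new kvm_x86_ops->sched_out callback, whose VMX
	 * implementation would only wrap vmx_get_cpl:
	 */
	static void vmx_sched_out(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.in_kernmode = vmx_get_cpl(vcpu) == 0;
	}

	/* Option B: no new callback; record it in the common x86 sched-out
	 * path (e.g. near the top of kvm_arch_vcpu_put, while the vcpu is
	 * still loaded):
	 */
	vcpu->arch.in_kernmode = kvm_x86_ops->get_cpl(vcpu) == 0;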

>  This will cache the result until the next sched_in, so that


'until the next sched_in' --> Do we need to clear the cached result in
sched_in?
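I.e. would we need something like the line below in kvm_arch_vcpu_load(),
or is it fine to keep the stale value until the next sched_out overwrites
it? (just a sketch of what I mean)

	/* arch/x86/kvm/x86.c, kvm_arch_vcpu_load() */
	vcpu->arch.in_kernmode = false;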

> kvm_vcpu_on_spin can use it.
> 
> Paolo
> 
> 


-- 
Regards,
Longpeng(Mike)
