Message-ID: <20181001155413.GA2314@rkaganb.sw.ru>
Date:   Mon, 1 Oct 2018 15:54:26 +0000
From:   Roman Kagan <rkagan@...tuozzo.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
CC:     Vitaly Kuznetsov <vkuznets@...hat.com>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        Radim Krčmář <rkrcmar@...hat.com>,
        "K. Y. Srinivasan" <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>,
        Mohammed Gamal <mmorsy@...hat.com>,
        Cathy Avery <cavery@...hat.com>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 4/7] KVM: x86: hyperv: keep track of mismatched VP
 indexes

On Mon, Oct 01, 2018 at 05:48:54PM +0200, Paolo Bonzini wrote:
> On 27/09/2018 11:17, Vitaly Kuznetsov wrote:
> > Roman Kagan <rkagan@...tuozzo.com> writes:
> > 
> >> On Wed, Sep 26, 2018 at 07:02:56PM +0200, Vitaly Kuznetsov wrote:
> >>> In most common cases the VP index of a vCPU matches its vCPU index.
> >>> Userspace is, however, free to set any mapping it wishes, and we need
> >>> to account for that when looking up a vCPU with a particular VP index.
> >>> To keep the search optimal in both cases, introduce a
> >>> 'num_mismatched_vp_indexes' counter that tracks how many vCPUs have a
> >>> mismatching VP index. When the counter is zero we can assume
> >>> vp_index == vcpu_idx.
> >>>
> >>> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> >>> ---
> >>>  arch/x86/include/asm/kvm_host.h |  3 +++
> >>>  arch/x86/kvm/hyperv.c           | 26 +++++++++++++++++++++++---
> >>>  2 files changed, 26 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> >>> index 09b2e3e2cf1b..711f79f1b5e6 100644
> >>> --- a/arch/x86/include/asm/kvm_host.h
> >>> +++ b/arch/x86/include/asm/kvm_host.h
> >>> @@ -781,6 +781,9 @@ struct kvm_hv {
> >>>  	u64 hv_reenlightenment_control;
> >>>  	u64 hv_tsc_emulation_control;
> >>>  	u64 hv_tsc_emulation_status;
> >>> +
> >>> +	/* How many vCPUs have VP index != vCPU index */
> >>> +	atomic_t num_mismatched_vp_indexes;
> >>>  };
> >>>  
> >>>  enum kvm_irqchip_mode {
> >>> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> >>> index c8764faf783b..6a19c8e3c432 100644
> >>> --- a/arch/x86/kvm/hyperv.c
> >>> +++ b/arch/x86/kvm/hyperv.c
> >>> @@ -1045,11 +1045,31 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
> >>>  	struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv;
> >>>  
> >>>  	switch (msr) {
> >>> -	case HV_X64_MSR_VP_INDEX:
> >>> -		if (!host || (u32)data >= KVM_MAX_VCPUS)
> >>> +	case HV_X64_MSR_VP_INDEX: {
> >>> +		struct kvm_hv *hv = &vcpu->kvm->arch.hyperv;
> >>> +		int vcpu_idx = kvm_vcpu_get_idx(vcpu);
> >>> +		u32 new_vp_index = (u32)data;
> >>> +
> >>> +		if (!host || new_vp_index >= KVM_MAX_VCPUS)
> >>>  			return 1;
> >>> -		hv_vcpu->vp_index = (u32)data;
> >>> +
> >>> +		if (new_vp_index == hv_vcpu->vp_index)
> >>> +			return 0;
> >>> +
> >>> +		/*
> >>> +		 * VP index is changing, increment num_mismatched_vp_indexes in
> >>> +		 * case it was equal to vcpu_idx before; on the other hand, if
> >>> +		 * the new VP index matches vcpu_idx num_mismatched_vp_indexes
> >>> +		 * needs to be decremented.
> >>
> >> It may be worth mentioning that the initial balance is provided by
> >> kvm_hv_vcpu_postcreate setting vp_index = vcpu_idx.
> >>
> > 
> > Of course, yes, I will update the comment if I end up re-submitting.
> 
> 	/*
> 	 * VP index is initialized to vcpu_idx by
> 	 * kvm_hv_vcpu_postcreate so they initially match.  Now the
> 	 * VP index is changing, adjust num_mismatched_vp_indexes if
> 	 * it now matches or no longer matches vcpu_idx.
> 	 */
> 
> ?

To my taste - perfect :)

Roman.

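For reference, below is a minimal sketch of how a VP-index lookup can exploit
the counter introduced in the patch quoted above. This is illustrative only,
not code from the series: the helper name is made up, and it assumes the usual
KVM accessors kvm_get_vcpu() and kvm_for_each_vcpu(). When
num_mismatched_vp_indexes is zero, vp_index == vcpu_idx for every vCPU, so the
target can be fetched by index directly; otherwise a linear scan compares each
vCPU's vp_index.

/*
 * Illustrative sketch only (not the actual patch): look up a vCPU by
 * its Hyper-V VP index, using num_mismatched_vp_indexes as a fast path.
 */
static struct kvm_vcpu *lookup_vcpu_by_vp_index(struct kvm *kvm, u32 vpidx)
{
	struct kvm_hv *hv = &kvm->arch.hyperv;
	struct kvm_vcpu *vcpu;
	int i;

	if (vpidx >= KVM_MAX_VCPUS)
		return NULL;

	/*
	 * Fast path: no vCPU has a remapped VP index, so
	 * vp_index == vcpu_idx and the vCPU can be fetched by index.
	 */
	if (atomic_read(&hv->num_mismatched_vp_indexes) == 0)
		return kvm_get_vcpu(kvm, vpidx);

	/* Slow path: at least one mapping differs, scan all vCPUs. */
	kvm_for_each_vcpu(i, vcpu, kvm)
		if (vcpu->arch.hyperv.vp_index == vpidx)
			return vcpu;

	return NULL;
}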