Date:   Thu, 28 Feb 2019 17:41:52 +0800
From:   Yang Weijiang <weijiang.yang@...el.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>
Cc:     pbonzini@...hat.com, rkrcmar@...hat.com, jmattson@...gle.com,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org, mst@...hat.com,
        yu-cheng.yu@...el.com, weijiang.yang@...el.com
Subject: Re: [PATCH v3 8/8] KVM:X86: Add user-space read/write interface for
 CET MSRs.

On Thu, Feb 28, 2019 at 08:32:49AM -0800, Sean Christopherson wrote:
> On Mon, Feb 25, 2019 at 09:27:16PM +0800, Yang Weijiang wrote:
> > The guest MSRs are stored in the fpu storage area and are
> > operated on by XSAVES/XRSTORS, so using kvm_load_guest_fpu
> > to restore them is a convenient way to let KVM access
> > them. After the operation finishes, the host MSR contents
> > must be restored with kvm_put_guest_fpu.
> > 
> > Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
> > ---
> >  arch/x86/kvm/x86.c | 46 +++++++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 43 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index a0f8b71b2132..a4bdbef3a712 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -75,6 +75,8 @@
> >  
> >  #define MAX_IO_MSRS 256
> >  #define KVM_MAX_MCE_BANKS 32
> > +#define MAX_GUEST_CET_MSRS 5
> > +
> >  u64 __read_mostly kvm_mce_cap_supported = MCG_CTL_P | MCG_SER_P;
> >  EXPORT_SYMBOL_GPL(kvm_mce_cap_supported);
> >  
> > @@ -214,6 +216,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
> >  u64 __read_mostly host_xcr0;
> >  
> >  static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt);
> > +static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
> > +static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
> >  
> >  static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu)
> >  {
> > @@ -2889,21 +2893,57 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_get_msr_common);
> >  
> > +static int do_cet_msrs(struct kvm_vcpu *vcpu, int entry_num,
> > +		       struct kvm_msr_entry *entries, bool read)
> > +{
> > +	int i = entry_num;
> > +	int j = MAX_GUEST_CET_MSRS;
> > +	bool has_cet;
> > +
> > +	has_cet = guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) |
> > +		  guest_cpuid_has(vcpu, X86_FEATURE_IBT);
> > +	/*
> > +	 * Guest CET MSRs are saved by XSAVES, so need to restore
> > +	 * them first, then read out or update the contents and
> > +	 * restore Host ones.
> > +	 */
> > +	if (has_cet) {
> > +		kvm_load_guest_fpu(vcpu);
> > +
> > +		if (read) {
> > +			for (j = 0; j < MAX_GUEST_CET_MSRS; j++, i++)
> > +				rdmsrl(entries[i].index, entries[i].data);
> > +		} else {
> > +			for (j = 0; j < MAX_GUEST_CET_MSRS; j++, i++)
> > +				wrmsrl(entries[i].index, entries[i].data);
> > +		}
> > +
> > +		kvm_put_guest_fpu(vcpu);
> > +	}
> > +	return j;
> > +}
> >  /*
> >   * Read or write a bunch of msrs. All parameters are kernel addresses.
> >   *
> >   * @return number of msrs set successfully.
> >   */
> >  static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
> > -		    struct kvm_msr_entry *entries,
> > +		    struct kvm_msr_entry *entries, bool read,
> >  		    int (*do_msr)(struct kvm_vcpu *vcpu,
> >  				  unsigned index, u64 *data))
> >  {
> >  	int i;
> >  
> > -	for (i = 0; i < msrs->nmsrs; ++i)
> > +	for (i = 0; i < msrs->nmsrs; ++i) {
> > +		/* If it comes to CET related MSRs, read them together. */
> > +		if (entries[i].index == MSR_IA32_U_CET) {
> > +			i += do_cet_msrs(vcpu, i, entries, read) - 1;
> 
> This is wrong, not to mention horribly buggy.  The correct way to handle
> MSRs that are switched via the VMCS is to read/write the VMCS in
> vmx_{get,set}_msr() as needed, e.g. vmcs_writel(GUEST_GS_BASE, data).
>
The code is purely for migration purposes, since these MSRs are passed
through to the guest; I have no intent to expose them like normal MSRs.
I double-checked the spec.:
MSR_IA32_U_CET                  0x6a0
MSR_IA32_PL0_SSP                0x6a4
MSR_IA32_PL3_SSP                0x6a7
should come from the xsave area.

MSR_IA32_S_CET                  0x6a2
MSR_IA32_INTR_SSP_TABL          0x6a8
should come from VMCS guest fields.

For the former group, the values are stored in the guest fpu area, so they
need to be restored temporarily with xrstors before being read; the latter
are stored in VMCS guest fields, so I can read them out via vmcs_read()
directly.

I'd like to read them out as a whole (all or nothing) because of the cost
induced by xsaves/xrstors; in Qemu I laid these MSRs out as sequential fields.

What do you think of it?
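To make the proposed split concrete, here is a minimal user-space sketch (not kernel code; the real implementation would use rdmsrl/wrmsrl after kvm_load_guest_fpu for the xsave-managed group, and vmcs_readl/vmcs_writel for the VMCS-managed group) of the classification described above. The MSR index values are taken from the list in this mail; the helper name is hypothetical.

```c
#include <stdbool.h>

/* CET MSR indices, as listed above (from the SDM). */
#define MSR_IA32_U_CET         0x6a0
#define MSR_IA32_S_CET         0x6a2
#define MSR_IA32_PL0_SSP       0x6a4
#define MSR_IA32_PL3_SSP       0x6a7
#define MSR_IA32_INTR_SSP_TABL 0x6a8

/*
 * Hypothetical helper: returns true for CET MSRs whose guest values
 * live in the xsave area (saved/restored by XSAVES/XRSTORS), and
 * false for those that live in VMCS guest fields.
 */
static bool cet_msr_in_xsave_area(unsigned int index)
{
	switch (index) {
	case MSR_IA32_U_CET:
	case MSR_IA32_PL0_SSP:
	case MSR_IA32_PL3_SSP:
		return true;	/* needs kvm_load_guest_fpu + rdmsrl/wrmsrl */
	case MSR_IA32_S_CET:
	case MSR_IA32_INTR_SSP_TABL:
	default:
		return false;	/* VMCS guest field: vmcs_readl/vmcs_writel */
	}
}
```

With such a split, only the xsave-managed group pays the xsaves/xrstors round-trip cost, which is the motivation for handling those MSRs together rather than one at a time.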


> > +			continue;
> > +		}
> > +
> >  		if (do_msr(vcpu, entries[i].index, &entries[i].data))
> >  			break;
> > +	}
> >  
> >  	return i;
> >  }
> > @@ -2938,7 +2978,7 @@ static int msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs __user *user_msrs,
> >  		goto out;
> >  	}
> >  
> > -	r = n = __msr_io(vcpu, &msrs, entries, do_msr);
> > +	r = n = __msr_io(vcpu, &msrs, entries, !!writeback, do_msr);
> >  	if (r < 0)
> >  		goto out_free;
> >  
> > -- 
> > 2.17.1
> > 
