Date:	Wed, 06 May 2015 12:39:45 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Bandan Das <bsd@...hat.com>
CC:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	rkrcmar@...hat.com, guangrong.xiao@...ux.intel.com,
	Yang Zhang <yang.z.zhang@...el.com>, wanpeng.li@...ux.intel.com
Subject: Re: [PATCH 09/13] KVM: x86: save/load state on SMM switch



On 05/05/2015 22:44, Bandan Das wrote:
> Paolo Bonzini <pbonzini@...hat.com> writes:
> 
>> +static inline bool guest_cpuid_has_longmode(struct kvm_vcpu *vcpu)
>> +{
>> +	struct kvm_cpuid_entry2 *best;
>> +
>> +	best = kvm_find_cpuid_entry(vcpu, 0x80000001, 0);
>> +	return best && (best->edx & bit(X86_FEATURE_LM));
>> +}
>> +
> 
> We could combine all guest_cpuid_has* functions into a single
> function guest_cpuid_has_feature(vcpu, feature) and avoid code duplication.
> Not relevant to this change - just a note (to self).
> 
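A combined helper could look something like this (untested sketch; the
function/index/reg parameters are invented here, not existing code):

	/* reg: 0=eax 1=ebx 2=ecx 3=edx */
	static inline bool guest_cpuid_has_feature(struct kvm_vcpu *vcpu,
						   u32 function, u32 index,
						   int reg, int feature)
	{
		struct kvm_cpuid_entry2 *best;

		best = kvm_find_cpuid_entry(vcpu, function, index);
		if (!best)
			return false;

		switch (reg) {
		case 0: return best->eax & bit(feature);
		case 1: return best->ebx & bit(feature);
		case 2: return best->ecx & bit(feature);
		default: return best->edx & bit(feature);
		}
	}

so that guest_cpuid_has_longmode() would become
guest_cpuid_has_feature(vcpu, 0x80000001, 0, 3, X86_FEATURE_LM).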
>>  static inline bool guest_cpuid_has_osvw(struct kvm_vcpu *vcpu)
>>  {
>>  	struct kvm_cpuid_entry2 *best;
>> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
>> index f6b641207416..a49606871277 100644
>> --- a/arch/x86/kvm/emulate.c
>> +++ b/arch/x86/kvm/emulate.c
>> @@ -2262,12 +2262,253 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt)
>>  	return rc;
>>  }
>>  
>> +static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
>> +{
>> +	u32 eax, ebx, ecx, edx;
>> +
>> +	eax = 0x80000001;
>> +	ecx = 0;
>> +	ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx);
>> +	return edx & bit(X86_FEATURE_LM);
>> +}
>> +
>> +#define get_smstate(type, smbase, offset)				  \
>> +	({								  \
>> +	 type __val;							  \
>> +	 int r = ctxt->ops->read_std(ctxt, smbase + offset, &__val,       \
>> +				     sizeof(__val), NULL);		  \
>> +	 if (r != X86EMUL_CONTINUE)					  \
>> +		 return X86EMUL_UNHANDLEABLE;				  \
>> +	 __val;								  \
>> +	})
>> +
>> +static void rsm_set_desc_flags(struct desc_struct *desc, u16 flags)
>> +{
>> +	desc->g    = (flags >> 15) & 1;
>> +	desc->d    = (flags >> 14) & 1;
>> +	desc->l    = (flags >> 13) & 1;
>> +	desc->avl  = (flags >> 12) & 1;
>> +	desc->p    = (flags >> 7) & 1;
>> +	desc->dpl  = (flags >> 5) & 3;
>> +	desc->s    = (flags >> 4) & 1;
>> +	desc->type = flags & 15;
>> +}
>> +
>> +static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
>> +{
>> +	struct desc_struct desc;
>> +	int offset;
>> +	u16 selector;
>> +
>> +	selector = get_smstate(u32, smbase, 0x7fa8 + n * 4);
> Probably a good idea to use #defines for all the offsets here
> and elsewhere. 

Uff, those would be a lot of offsets.  It's just as easy to get an
offset wrong in a #define as in the get_smstate call itself, and the
resulting >80 character lines would mess everything up, I think.
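For example (names invented here, just to show the shape it would take):

	/* hypothetical; dozens of these would be needed */
	#define SMRAM32_SEG_SEL(n)	(0x7fa8 + (n) * 4)

	selector = get_smstate(u32, smbase, SMRAM32_SEG_SEL(n));

A typo in the #define is no more visible than a typo in the literal.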

> This return is redundant since rsm_load_seg_32 always returns
> success. Same for rsm_load_seg_64()

Note that the get_smstate macro can do an early return.
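Because get_smstate is a GNU statement expression, the "return" inside
it exits the enclosing function.  The assignment above effectively
expands to:

	selector = ({
		u32 __val;
		int r = ctxt->ops->read_std(ctxt, smbase + 0x7fa8 + n * 4,
					    &__val, sizeof(__val), NULL);
		if (r != X86EMUL_CONTINUE)
			return X86EMUL_UNHANDLEABLE;	/* exits rsm_load_seg_32 */
		__val;
	});

so rsm_load_seg_32 does not unconditionally succeed.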

>>  static int em_rsm(struct x86_emulate_ctxt *ctxt)
>>  {
>> +	unsigned long cr0, cr4, efer;
>> +	u64 smbase;
>> +	int ret;
>> +
>> +	printk("rsm\n");
>>  	if ((ctxt->emul_flags & X86EMUL_SMM_MASK) == 0)
>>  		return emulate_ud(ctxt);
>>  
>> -	return X86EMUL_UNHANDLEABLE;
>> +	/*
>> +	 * Get back to real mode, to prepare a safe state in which
>> +	 * to load CR0/CR3/CR4/EFER.  Also this will ensure that
>> +	 * addresses passed to read_std/write_std are not virtual.
>> +	 */
> 
> I am trying to understand this. Aren't we already in real mode
> here since we are in SMM, or am I missing something?

SMM starts in big real mode.  The SMI handler can go to protected mode
and even enable paging; as long as it's not 64-bit, it can execute an RSM.

I should check for CR4.PCIDE and clear it before clearing CR0.PG.
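Roughly (untested sketch of the ordering fix):

	cr4 = ctxt->ops->get_cr(ctxt, 4);
	if (cr4 & X86_CR4_PCIDE)
		ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);

	cr0 = ctxt->ops->get_cr(ctxt, 0);
	if (cr0 & X86_CR0_PE)
		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));

since clearing CR0.PG while CR4.PCIDE is still set would #GP.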

>> +	cr0 = ctxt->ops->get_cr(ctxt, 0);
>> +	if (cr0 & X86_CR0_PE)
>> +		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
>> +	cr4 = ctxt->ops->get_cr(ctxt, 4);
>> +	if (cr0 & X86_CR4_PAE)

Oops, cr4 here.

>> +	/* revision id */
>> +	put_smstate(u32, buf, 0x7efc, 0x00020064);
> Is the revision id (and  0x00020000 for process_smi*_32()) from the
> spec ? I can't seem to find them.

This revision id is in the AMD spec.  You can see that SeaBIOS checks
for it, and for the 32-bit one as well (which I cannot find anywhere).

SeaBIOS should also accept 0x30000 and 0x30064.  Sending a patch.
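The shape of the check would be something like this (not SeaBIOS's
actual code; names are made up):

	/* rev as read from the SMM state save area */
	switch (rev) {
	case 0x00020000: case 0x00020064:	/* accepted today */
	case 0x00030000: case 0x00030064:	/* proposed additions */
		break;
	default:
		/* unsupported SMM revision id */
		return;
	}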

Paolo

> Bandan
> ...
> 
