Date:	Wed, 06 May 2015 13:55:11 -0400
From:	Bandan Das <bsd@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	rkrcmar@...hat.com, guangrong.xiao@...ux.intel.com,
	Yang Zhang <yang.z.zhang@...el.com>, wanpeng.li@...ux.intel.com
Subject: Re: [PATCH 09/13] KVM: x86: save/load state on SMM switch

Paolo Bonzini <pbonzini@...hat.com> writes:
..
>>> +
>>> +	selector = get_smstate(u32, smbase, 0x7fa8 + n * 4);
>> Probably a good idea to use #defines for all the offsets here
>> and elsewhere. 
>
> Uff, those would be a lot of offsets.  It's just as easy to get the
> #defines right, as it is to get them right in get_smstate...  >80
> character lines would also mess everything up, I think.

Valid point. But I would say even cryptic names will always be better
than bare hex offsets. It's a one-time pain in the neck to get the
defines right.

I found this header -
ftp://ftp.xskernel.org/soft/linux-src/bochs-20080511/cpu/smm.h

Maybe we can pick it up? I can sanity-check the list and
make sure the offsets are correct if you prefer.
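
For example, something along these lines (names invented by me from
the two offsets visible in this thread, not taken from the Bochs
header or from the patch itself):

/* SMM state-save area offsets (illustrative names only) */
#define SMRAM_SMM_REVID		0x7efc			/* revision identifier */
#define SMRAM32_SEG_SEL(n)	(0x7fa8 + (n) * 4)	/* segment selectors */

	selector = get_smstate(u32, smbase, SMRAM32_SEG_SEL(n));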

>> This return is redundant since rsm_load_seg_32 always returns
>> success. Same for rsm_load_seg_64()
>
> Note that the get_smstate macro can do an early return.
>
Oops! Sorry, right. I suggest changing get_smstate to get_smstate_iam_a_macro_not_a_function ;)
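
For anyone else reading along, the pattern is roughly this (my sketch,
not the exact hunk from the patch; read_std and the X86EMUL_* codes
are the ones the emulator already uses):

#define get_smstate(type, smbase, offset)				\
({									\
	type __val;							\
	int __r = ctxt->ops->read_std(ctxt, (smbase) + (offset),	\
				      &__val, sizeof(__val), NULL);	\
	if (__r != X86EMUL_CONTINUE)					\
		/* returns from the *caller*, e.g. rsm_load_seg_32() */	\
		return X86EMUL_UNHANDLEABLE;				\
	__val;								\
})

so the return at the end of rsm_load_seg_32() is not the only exit
path after all.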

>>>  static int em_rsm(struct x86_emulate_ctxt *ctxt)
>>>  {
>>> +	unsigned long cr0, cr4, efer;
>>> +	u64 smbase;
>>> +	int ret;
>>> +
>>> +	printk("rsm\n");
>>>  	if ((ctxt->emul_flags & X86EMUL_SMM_MASK) == 0)
>>>  		return emulate_ud(ctxt);
>>>  
>>> -	return X86EMUL_UNHANDLEABLE;
>>> +	/*
>>> +	 * Get back to real mode, to prepare a safe state in which
>>> +	 * to load CR0/CR3/CR4/EFER.  Also this will ensure that
>>> +	 * addresses passed to read_std/write_std are not virtual.
>>> +	 */
>> 
>> I am trying to understand this. Aren't we already in real mode
>> here since we are in SMM, or am I missing something?
>
> SMM starts in big real mode.  The SMI handler can go to protected mode
> and even enable paging; as long as it's not 64-bit, it can execute an RSM.

Ok, that was the part I was missing: RSM can be executed from
protected mode too. Thanks for the explanation.

> I should check and disable CR4.PCIDE before CR0.PG.
>
Actually, SDM section 4.10.1 says that "the processor ensures that
CR4.PCIDE can only be 1 in IA-32e mode". At this point it should
already be 0.
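
(If you did want the check anyway, I assume it would just be something
like this before CR0.PG is cleared, reusing the ops from the patch:

	cr4 = ctxt->ops->get_cr(ctxt, 4);
	if (cr4 & X86_CR4_PCIDE)
		ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);

but per the above it should be dead code.)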

Bandan

>>> +	cr0 = ctxt->ops->get_cr(ctxt, 0);
>>> +	if (cr0 & X86_CR0_PE)
>>> +		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
>>> +	cr4 = ctxt->ops->get_cr(ctxt, 4);
>>> +	if (cr0 & X86_CR4_PAE)
>
> Oops, cr4 here.
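
Right, so presumably the corrected hunk reads as follows (the body of
the if is truncated in the quote above, so the set_cr line is my
guess):

	cr4 = ctxt->ops->get_cr(ctxt, 4);
	if (cr4 & X86_CR4_PAE)
		ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);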
>
>>> +	/* revision id */
>>> +	put_smstate(u32, buf, 0x7efc, 0x00020064);
>> Is the revision id (and 0x00020000 for process_smi*_32()) from the
>> spec? I can't seem to find them.
>
> This revision id is in the AMD spec.  You can see that SeaBIOS checks
> for it and the 32-bit one as well (which I cannot find anywhere).
>
> SeaBIOS should also accept 0x30000 and 0x30064.  Sending a patch.
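
For reference, the way I read the SDM, the revision identifier at
0x7efc breaks down as: bits 15:0 revision level, bit 16 I/O
instruction restart, bit 17 SMBASE relocation. So, with made-up names:

#define SMM_REV_IO_RESTART	(1u << 16)
#define SMM_REV_SMBASE_RELOC	(1u << 17)
#define SMM_REV_ID_32		(SMM_REV_SMBASE_RELOC | 0x00)	/* 0x00020000 */
#define SMM_REV_ID_64		(SMM_REV_SMBASE_RELOC | 0x64)	/* 0x00020064, AMD64 save area */

	put_smstate(u32, buf, 0x7efc, SMM_REV_ID_64);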
>
> Paolo
>
>> Bandan
>> ...
>> 
