Message-ID: <55766510.7010108@linux.intel.com>
Date:	Tue, 09 Jun 2015 12:01:20 +0800
From:	Xiao Guangrong <guangrong.xiao@...ux.intel.com>
To:	Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org
CC:	rkrcmar@...hat.com, bdas@...hat.com
Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM
 address space



On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
> This is now very simple to do.  The only interesting part is a simple
> trick to find the right memslot in gfn_to_rmap, retrieving the address
> space from the spte role word.  The same trick is used in the auditing
> code.
>
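To make sure I read the trick right: gfn_to_rmap would end up doing roughly
the following (my own sketch of the lookup; only gfn_to_rmap and
kvm_memslots_for_spte_role appear in the patch, the other helpers are my
assumption about the surrounding KVM code):

	static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
					  struct kvm_mmu_page *sp)
	{
		struct kvm_memslots *slots;
		struct kvm_memory_slot *slot;

		/* Pick the SMM or normal address space from the spte role word. */
		slots = kvm_memslots_for_spte_role(kvm, sp->role);
		slot = __gfn_to_memslot(slots, gfn);
		return __gfn_to_rmap(gfn, sp->role.level, slot);
	}
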
> The comment on top of union kvm_mmu_page_role has been stale forever,

Fortunately, these fields are already documented in mmu.txt; please document
'smm' there as well. :)
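Something along these lines would be enough, I think (suggested wording only,
not taken from the patch):

	role.smm:
	  Is 1 if the page is valid in system management mode.  This field
	  selects which of the two memslot address spaces the shadow page
	  belongs to.
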

> so remove it.  Speaking of stale code, remove pad_for_nice_hex_output
> too: it was splitting the "access" bitfield across two bytes and thus
> had effectively turned into pad_for_ugly_hex_output.
>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> 	v1->v2: new
>
>   arch/x86/include/asm/kvm_host.h | 26 +++++++++++++++-----------
>   arch/x86/kvm/mmu.c              | 15 ++++++++++++---
>   arch/x86/kvm/mmu_audit.c        | 10 +++++++---
>   arch/x86/kvm/x86.c              |  2 ++
>   4 files changed, 36 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5a5e13af6e03..47006683f2fe 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
>   	void *objects[KVM_NR_MEM_OBJS];
>   };
>
> -/*
> - * kvm_mmu_page_role, below, is defined as:
> - *
> - *   bits 0:3 - total guest paging levels (2-4, or zero for real mode)
> - *   bits 4:7 - page table level for this shadow (1-4)
> - *   bits 8:9 - page table quadrant for 2-level guests
> - *   bit   16 - direct mapping of virtual to physical mapping at gfn
> - *              used for real mode and two-dimensional paging
> - *   bits 17:19 - common access permissions for all ptes in this shadow page
> - */
>   union kvm_mmu_page_role {
>   	unsigned word;
>   	struct {
>   		unsigned level:4;
>   		unsigned cr4_pae:1;
>   		unsigned quadrant:2;
> -		unsigned pad_for_nice_hex_output:6;
>   		unsigned direct:1;
>   		unsigned access:3;
>   		unsigned invalid:1;
> @@ -208,6 +197,15 @@ union kvm_mmu_page_role {
>   		unsigned cr0_wp:1;
>   		unsigned smep_andnot_wp:1;
>   		unsigned smap_andnot_wp:1;
> +		unsigned :8;
> +
> +		/*
> +		 * This is left at the top of the word so that
> +		 * kvm_memslots_for_spte_role can extract it with a
> +		 * simple shift.  While there is room, give it a whole
> +		 * byte so it is also faster to load it from memory.
> +		 */
> +		unsigned smm:8;

I doubt we really need this trick; smm is not the hottest field in this
struct anyway.
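To be concrete, with smm in the top byte the selector is nothing more than
(a sketch; __kvm_memslots() taking an address-space index is my assumption
from the rest of the series):

	/*
	 * 'smm' is the top byte of role.word, so reading it is a single
	 * shift, or a plain byte load when the role sits in memory.
	 */
	#define kvm_memslots_for_spte_role(kvm, role) \
		__kvm_memslots(kvm, (role).smm)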

Otherwise looks good to me:
Reviewed-by: Xiao Guangrong <guangrong.xiao@...ux.intel.com>