Message-ID: <20160912143555.26lxdu3lv3o5hjp7@pd.tnic>
Date:   Mon, 12 Sep 2016 16:35:55 +0200
From:   Borislav Petkov <bp@...en8.de>
To:     Tom Lendacky <thomas.lendacky@....com>
Cc:     linux-arch@...r.kernel.org, linux-efi@...r.kernel.org,
        kvm@...r.kernel.org, linux-doc@...r.kernel.org, x86@...nel.org,
        linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com,
        linux-mm@...ck.org, iommu@...ts.linux-foundation.org,
        Radim Krčmář <rkrcmar@...hat.com>,
        Arnd Bergmann <arnd@...db.de>,
        Jonathan Corbet <corbet@....net>,
        Matt Fleming <matt@...eblueprint.co.uk>,
        Joerg Roedel <joro@...tes.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Ingo Molnar <mingo@...hat.com>,
        Andy Lutomirski <luto@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Alexander Potapenko <glider@...gle.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Dmitry Vyukov <dvyukov@...gle.com>
Subject: Re: [RFC PATCH v2 18/20] x86/kvm: Enable Secure Memory Encryption of
 nested page tables

On Mon, Aug 22, 2016 at 05:38:49PM -0500, Tom Lendacky wrote:
> Update the KVM support to include the memory encryption mask when creating
> and using nested page tables.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> ---
>  arch/x86/include/asm/kvm_host.h |    3 ++-
>  arch/x86/kvm/mmu.c              |    8 ++++++--
>  arch/x86/kvm/vmx.c              |    3 ++-
>  arch/x86/kvm/x86.c              |    3 ++-
>  4 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 33ae3a4..c51c1cb 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1039,7 +1039,8 @@ void kvm_mmu_setup(struct kvm_vcpu *vcpu);
>  void kvm_mmu_init_vm(struct kvm *kvm);
>  void kvm_mmu_uninit_vm(struct kvm *kvm);
>  void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
> -		u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask);
> +		u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask,
> +		u64 me_mask);

Why do you need a separate mask?

arch/x86/kvm/mmu.c::set_spte() ORs in shadow_present_mask
unconditionally. So you can simply do:

	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
			      PT_DIRTY_MASK, PT64_NX_MASK, 0,
			      PT_PRESENT_MASK | sme_me_mask);

and make this change much simpler.
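
For reference, this is the set_spte() path I mean, heavily trimmed
down, just to illustrate where p_mask ends up:

	static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep, ...)
	{
		u64 spte = 0;

		...

		/* p_mask from kvm_mmu_set_mask_ptes() lands here */
		spte |= shadow_present_mask;
		...
	}

so whatever you fold into p_mask gets set in every present SPTE anyway
and the encryption bit rides along for free.
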
>  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
>  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 3d4cc8cc..a7040f4 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -122,7 +122,7 @@ module_param(dbg, bool, 0644);
>  					    * PT32_LEVEL_BITS))) - 1))
>  
>  #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
> -			| shadow_x_mask | shadow_nx_mask)
> +			| shadow_x_mask | shadow_nx_mask | shadow_me_mask)

This would be sme_me_mask, of course, like with the baremetal masks.
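
IOW, something like this (completely untested, just to show what I
mean):

	#define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
				| shadow_x_mask | shadow_nx_mask | sme_me_mask)

i.e., use the global mask directly instead of carrying a shadow_ copy
of it around.
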
Or am I missing something?
-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.