Message-ID: <20220714180547.GS1379820@ls.amr.corp.intel.com>
Date:   Thu, 14 Jul 2022 11:05:47 -0700
From:   Isaku Yamahata <isaku.yamahata@...il.com>
To:     Kai Huang <kai.huang@...el.com>
Cc:     isaku.yamahata@...el.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, isaku.yamahata@...il.com,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH v7 036/102] KVM: x86/mmu: Allow non-zero value for
 non-present SPTE

On Thu, Jun 30, 2022 at 11:03:56PM +1200,
Kai Huang <kai.huang@...el.com> wrote:

> On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@...el.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@...el.com>
> > 
> > TDX introduces a new EPT, Secure-EPT, in addition to the existing EPT.
> > Secure-EPT maps protected guest memory, which is called private.  Since
> > the Secure-EPT page tables are also protected, those page tables are also
> > called private.  The existing EPT is often called shared EPT to
> > distinguish it from Secure-EPT, and its page tables are likewise called
> > shared.
> 
> Does this patch have anything to do with Secure-EPT?
> 
> > 
> > Virtualization Exception, #VE, is a new processor exception in VMX non-root
> 
> #VE isn't new.  It's already in the pre-TDX public spec AFAICT.
> 
> > operation.  In certain virtualization-related conditions, #VE is injected
> > into the guest instead of exiting from the guest to the VMM, so that the
> > guest is given a chance to inspect it.  One important case is EPT
> > violation.  When the "EPT-violation #VE" VM-execution control is set and
> > the "suppress #VE" bit in the EPT entry is cleared, #VE is injected
> > instead of an EPT violation.
> 
> We already know this from the pre-TDX public spec.  Instead of repeating it
> here, why not focus on what's new in TDX, so that your paragraph below about
> setting a non-zero value for non-present SPTEs can be justified?

Ok, will drop those two paragraphs above.
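
For clarity, the value in question is just the EPT "suppress #VE" bit, which
is bit 63 of an EPT entry.  Roughly, the definition used in spte.h would look
like this (a sketch, not the exact hunk):

	/*
	 * EPT bit 63 is the "suppress #VE" bit.  Keep it set in every
	 * non-present SPTE so that a stray guest access causes an EPT
	 * violation to the VMM instead of a #VE to the guest.
	 */
	#ifdef CONFIG_X86_64
	#define SHADOW_NONPRESENT_VALUE	BIT_ULL(63)
	#else
	#define SHADOW_NONPRESENT_VALUE	0ULL
	#endif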


> > Because guest memory is protected with TDX, the VMM can't parse
> > instructions in guest memory.  Instead, an MMIO hypercall is used for the
> > guest to pass the necessary information to the VMM.
> > 
> > To make unmodified device drivers work, the guest TD expects a #VE on
> > accesses to shared GPAs.  The #VE handler converts the MMIO access into an
> > MMIO hypercall; #VE is enabled for the EPT entry by clearing the
> > "suppress #VE" bit.  Before the VMM enables #VE, it needs to figure out,
> > via an EPT violation, that the given GPA is for MMIO.
> > 
> 
> As I said above, before this point you need to explain that in TDX the VMCS
> is controlled by the TDX module and that it always sets the "EPT-violation
> #VE" execution control bit.
> 
> > So the execution flow looks like:
> > 
> > - Allocate an unused shared EPT entry with the "suppress #VE" bit set.
> > - An EPT violation occurs on that GPA.
> > - The VMM figures out that the faulted GPA is for MMIO.
> > - The VMM clears the "suppress #VE" bit.
> > - The guest TD gets #VE and converts the MMIO access into an MMIO hypercall.
> > - If the GPA maps guest memory, the VMM resolves it with guest pages.
> > 
> > For both cases, the SPTE needs the "suppress #VE" bit set initially when
> > it is allocated or zapped, therefore a non-zero non-present value for the
> > SPTE needs to be allowed.
> > 
> > This change requires updating FNAME(sync_page) for shadow EPT.
> > "if (!sp->spt[i])" in FNAME(sync_page) means that the SPTE entry still
> > holds the initial value.  With the introduction of shadow_nonpresent_value,
> > which can be non-zero, that no longer holds.  Replace the zero check with
> > "!is_shadow_present_pte() && !is_mmio_spte()".
> 
> I don't think you need to mention the above paragraph.  Reading the above
> paragraphs, it's completely unclear how is_mmio_spte() will be impacted by
> this patch.
> 
> From the "execution flow" you mentioned above, you will change MMIO faults
> from EPT misconfiguration to EPT violation (in order to get #VE), so
> theoretically you may effectively disable MMIO caching, in which case, if I
> understand correctly, is_mmio_spte() always returns false.
> 
> I guess you can just change to check:
> 
> 	if (sp->spt[i] != shadow_nonpresent_value)
> 
> Anyway, IMO you can just put a comment in the code.
> 
> After all, what is shadow_nonpresent_value, given that you haven't explained
> what it is?

I'll drop the paragraph and add a comment on the code.
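
The comment I have in mind for FNAME(sync_page) is something like this
(a sketch; the point is that the initial value is no longer guaranteed to be
zero):

	/*
	 * The SPTE still holds the init value and is not an MMIO SPTE,
	 * so there is nothing to sync.  A plain zero check no longer
	 * works because the init value may be SHADOW_NONPRESENT_VALUE.
	 */
	if (!is_shadow_present_pte(sp->spt[i]) && !is_mmio_spte(sp->spt[i]))
		continue;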


> > The TDP MMU uses REMOVED_SPTE = 0x5a0ULL as a special constant: an
> > intermediate value indicating that one thread is operating on the SPTE.
> > The value is semi-arbitrary.  For TDX (more precisely, to use #VE), the
> > value should also include the "suppress #VE" bit, i.e.
> > SHADOW_NONPRESENT_VALUE.
> 
> What is SHADOW_NONPRESENT_VALUE?
> 
> > Rename REMOVED_SPTE to __REMOVED_SPTE and define REMOVED_SPTE as
> > SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE to set the "suppress #VE" bit.
> 
> Ditto.  IMHO you don't even need to mention REMOVED_SPTE in the changelog.
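
For reference, the rename itself is tiny; in spte.h it would look roughly
like this (sketch only, keeping the existing 0x5a0ULL marker):

	/* Raw marker used while a thread is removing/zapping an SPTE. */
	#define __REMOVED_SPTE	0x5a0ULL
	/*
	 * Keep "suppress #VE" (SHADOW_NONPRESENT_VALUE) set even in the
	 * intermediate removed state, so a guest access to a removed SPTE
	 * still causes an EPT violation rather than an unexpected #VE.
	 */
	#define REMOVED_SPTE	(SHADOW_NONPRESENT_VALUE | __REMOVED_SPTE)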



> > Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c         | 55 ++++++++++++++++++++++++++++++----
> >  arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
> >  arch/x86/kvm/mmu/spte.c        |  5 +++-
> >  arch/x86/kvm/mmu/spte.h        | 37 ++++++++++++++++++++---
> >  arch/x86/kvm/mmu/tdp_mmu.c     | 23 +++++++++-----
> >  5 files changed, 105 insertions(+), 18 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 51306b80f47c..f239b6cb5d53 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -668,6 +668,44 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> >  	}
> >  }
> >  
> > +static inline void kvm_init_shadow_page(void *page)
> > +{
> > +#ifdef CONFIG_X86_64
> > +	int ign;
> > +
> > +	WARN_ON_ONCE(shadow_nonpresent_value != SHADOW_NONPRESENT_VALUE);
> > +	asm volatile (
> > +		"rep stosq\n\t"
> > +		: "=c"(ign), "=D"(page)
> > +		: "a"(SHADOW_NONPRESENT_VALUE), "c"(4096/8), "D"(page)
> > +		: "memory"
> > +	);
> > +#else
> > +	BUG();
> > +#endif
> > +}
> > +
> > +static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
> > +	int start, end, i, r;
> > +	bool is_tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
> > +
> > +	if (is_tdp_mmu && shadow_nonpresent_value)
> > +		start = kvm_mmu_memory_cache_nr_free_objects(mc);
> > +
> > +	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
> > +	if (r)
> > +		return r;
> > +
> > +	if (is_tdp_mmu && shadow_nonpresent_value) {
> > +		end = kvm_mmu_memory_cache_nr_free_objects(mc);
> > +		for (i = start; i < end; i++)
> > +			kvm_init_shadow_page(mc->objects[i]);
> > +	}
> 
> I think you can just extend this to the legacy MMU too, not only the TDP MMU.
> 
> After all, before this patch, where have you declared that TDX only supports
> the TDP MMU?  That is only enforced in:
> 
> 	[PATCH v7 043/102] KVM: x86/mmu: Focibly use TDP MMU for TDX
> 
> which is 7 patches later.
> 
> Also, shadow_nonpresent_value is only used in a couple of places, while
> SHADOW_NONPRESENT_VALUE is used directly in more places.  Does it make more
> sense to always use shadow_nonpresent_value instead of
> SHADOW_NONPRESENT_VALUE?
> 
> Similar to the other shadow values, we can provide a function to let the
> caller (VMX/SVM) decide whether it wants to use a non-zero value for
> non-present SPTEs:
> 
> 	void kvm_mmu_set_non_present_value(u64 value)
> 	{
> 		shadow_nonpresent_value = value;
> 	}

As you pointed out, that logic is independent of the TDP MMU vs. the legacy
MMU, so I'll remove the is_tdp_mmu check.  I'll also drop
shadow_nonpresent_value and use SHADOW_NONPRESENT_VALUE directly.
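
With that, the helper becomes unconditional; roughly (a sketch, not the final
patch):

	static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
	{
		struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
		int start, end, i, r;

		/* Remember how many pages were already in the cache. */
		start = kvm_mmu_memory_cache_nr_free_objects(mc);

		r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
		if (r)
			return r;

		/*
		 * Initialize only the newly topped-up pages: fill each SPTE
		 * with SHADOW_NONPRESENT_VALUE rather than relying on zeroed
		 * pages.  This applies to both the TDP MMU and the legacy MMU.
		 */
		end = kvm_mmu_memory_cache_nr_free_objects(mc);
		for (i = start; i < end; i++)
			kvm_init_shadow_page(mc->objects[i]);

		return 0;
	}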

-- 
Isaku Yamahata <isaku.yamahata@...il.com>
