Date:   Tue, 9 Aug 2022 14:44:19 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Yan Zhao <yan.y.zhao@...el.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, David Matlack <dmatlack@...gle.com>,
        Mingwei Zhang <mizhang@...gle.com>,
        Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH v3 5/8] KVM: x86/mmu: Set disallowed_nx_huge_page in TDP
 MMU before setting SPTE

On Tue, Aug 09, 2022, Paolo Bonzini wrote:
> On 8/9/22 05:26, Yan Zhao wrote:
> > hi Sean,
> > 
> > I understand this smp_rmb() is intended to prevent the reading of
> > sp->nx_huge_page_disallowed from happening before it's set to true in
> > kvm_tdp_mmu_map(). Is this understanding right?
> > 
> > If it's true, then do we also need the smp_rmb() for read of sp->gfn in
> > handle_removed_pt()? (or maybe for other fields in sp in other places?)
> 
> No, in that case the barrier is provided by rcu_dereference().  In fact, I
> am not sure the barriers are needed in this patch either (but the comments
> are :)):

Yeah, I'm 99% certain the barriers aren't strictly required, but I didn't love the
idea of depending on other implementation details for the barriers.  Of course I
completely overlooked the fact that all other sp fields would need the same
barriers...
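
For reference, the explicit pairing under discussion, sketched with simplified
surroundings (illustrative only, not the exact patch context):

	/* writer side, kvm_tdp_mmu_map(): set the flag before installing the SPTE */
	sp->nx_huge_page_disallowed = true;
	smp_wmb();	/* order the flag store before the SPTE store */
	tdp_mmu_set_spte_atomic(kvm, &iter, new_spte);

	/* reader side, e.g. handle_removed_pt(): */
	smp_rmb();	/* order the sp field reads after the SPTE read */
	if (sp->nx_huge_page_disallowed)
		/* ...safe to act on the flag... */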

> - the write barrier is certainly not needed because it is implicit in
> tdp_mmu_set_spte_atomic's cmpxchg64
> 
> - the read barrier _should_ also be provided by rcu_dereference(pt), but I'm
> not 100% sure about that. The reasoning is that you have
> 
> (1)	iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep));
> 	...
> (2)	tdp_ptep_t pt = spte_to_child_pt(old_spte, level);
> (3)	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
> 	...
> (4)	if (sp->nx_huge_page_disallowed) {
> 
> and (4) is definitely ordered after (1) thanks to the READ_ONCE hidden
> within (3) and the data dependency from old_spte to sp.

Yes, I think that's correct.  Callers must verify the SPTE is present before getting
the associated child shadow page.  KVM does have instances where a shadow page is
retrieved from the SPTE _pointer_, but that's the parent shadow page, i.e. isn't
guarded by the SPTE being present.

	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep));
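
The guarded child-page path, by contrast, has to keep the shape quoted above,
i.e. check the SPTE is present before following it to the child (simplified
sketch):

	old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
	if (!is_shadow_present_pte(old_spte))
		return;

	sp = sptep_to_sp(rcu_dereference(spte_to_child_pt(old_spte, level)));
	if (sp->nx_huge_page_disallowed)
		/* ordered after the present SPTE read via the data dependency */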

Something like this as a separate patch?

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index f0af385c56e0..9d982ccf4567 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -13,6 +13,12 @@
  * to be zapped while holding mmu_lock for read, and to allow TLB flushes to be
  * batched without having to collect the list of zapped SPs.  Flows that can
  * remove SPs must service pending TLB flushes prior to dropping RCU protection.
+ *
+ * The READ_ONCE() ensures that, if the SPTE points at a child shadow page, all
+ * fields in struct kvm_mmu_page will be read after the caller observes the
+ * present SPTE (KVM must check that the SPTE is present before following the
+ * SPTE's pfn to its associated shadow page).  Pairs with the implicit memory
+ * barrier in tdp_mmu_set_spte_atomic().
  */
 static inline u64 kvm_tdp_mmu_read_spte(tdp_ptep_t sptep)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bf2ccf9debca..ca50296e3696 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -645,6 +645,11 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
        lockdep_assert_held_read(&kvm->mmu_lock);

        /*
+        * The atomic CMPXCHG64 provides an implicit memory barrier and ensures
+        * that, if the SPTE points at a shadow page, all struct kvm_mmu_page
+        * fields are visible to readers before the SPTE is marked present.
+        * Pairs with ordering guarantees provided by kvm_tdp_mmu_read_spte().
+        *
         * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
         * does not hold the mmu_lock.
         */
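
For completeness, a user-space analogue of the pairing those comments describe,
using C11 atomics (illustrative only; the names and types are stand-ins, not
kernel code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct kvm_mmu_page; only the field under discussion. */
struct page_meta {
	bool nx_huge_page_disallowed;
};

/* Stand-in for the SPTE slot that publishes the child page. */
static _Atomic(struct page_meta *) slot;

/*
 * Writer: initialize all fields, then publish with a release CAS.  The
 * kernel's cmpxchg64 in tdp_mmu_set_spte_atomic() is at least this strong,
 * which is why no explicit smp_wmb() is needed there.
 */
static void writer(void)
{
	struct page_meta *sp = malloc(sizeof(*sp));
	struct page_meta *expected = NULL;

	if (!sp)
		return;

	sp->nx_huge_page_disallowed = true;
	atomic_compare_exchange_strong_explicit(&slot, &expected, sp,
						memory_order_release,
						memory_order_relaxed);
}

/*
 * Reader: consume-load the pointer; the address dependency orders the field
 * read after observing the "present" pointer, mirroring what READ_ONCE() +
 * rcu_dereference() provide in the TDP MMU.
 */
static bool reader(void)
{
	struct page_meta *sp =
		atomic_load_explicit(&slot, memory_order_consume);

	return sp && sp->nx_huge_page_disallowed;
}

int main(void)
{
	writer();
	printf("nx_huge_page_disallowed = %d\n", reader());
	return 0;
}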
