Message-Id: <20210225204749.1512652-1-seanjc@google.com>
Date: Thu, 25 Feb 2021 12:47:25 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
	Vitaly Kuznetsov <vkuznets@...hat.com>,
	Wanpeng Li <wanpengli@...cent.com>,
	Jim Mattson <jmattson@...gle.com>,
	Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, Ben Gardon <bgardon@...gle.com>
Subject: [PATCH 00/24] KVM: x86/mmu: Introduce MMU_PRESENT and fix bugs

This series implements the simple idea of tagging shadow-present SPTEs
with a single dedicated bit, instead of inferring "present" from a
non-zero SPTE that is neither MMIO nor REMOVED.  Doing so reduces KVM's
code footprint by 2k bytes on x86-64, and presumably adds a tiny
performance boost in related paths.
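
As a rough before/after sketch of what the dedicated bit buys (the mask
values and the _old/_new helper names below are illustrative, not the
actual definitions from arch/x86/kvm/mmu/spte.h), the "is this SPTE
shadow-present?" check collapses from ruling out every special encoding
to testing a single bit:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative encodings only, not KVM's real values. */
#define REMOVED_SPTE		0x5a0ULL	/* non-present "removed" marker */
#define SPTE_MMIO_MASK		(3ULL << 52)	/* tag for MMIO SPTEs */
#define SPTE_MMU_PRESENT_MASK	(1ULL << 11)	/* dedicated "present" bit */

static bool is_mmio_spte(uint64_t spte)
{
	return (spte & SPTE_MMIO_MASK) == SPTE_MMIO_MASK;
}

/* Old: "present" is inferred by ruling out every special encoding. */
static bool is_shadow_present_pte_old(uint64_t spte)
{
	return spte && !is_mmio_spte(spte) && spte != REMOVED_SPTE;
}

/* New: one dedicated bit answers the question directly. */
static bool is_shadow_present_pte_new(uint64_t spte)
{
	return spte & SPTE_MMU_PRESENT_MASK;
}
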
But, actually adding MMU_PRESENT without breaking one flow or another
is a bit of a debacle.  The main issue is that EPT doesn't have many low
available bits, and PAE doesn't have any high available bits.  And, the
existing MMU_WRITABLE and HOST_WRITABLE flags aren't optional, i.e. they
are needed for all flavors of paging.  The solution I settled on is to
make the *_WRITABLE bits configurable so that EPT can use high available
bits.
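
As a minimal sketch of that configurable-mask idea (the identifiers and
bit positions are modeled on the series but should be treated as
illustrative, including the hypothetical setup helper), the writable
bits become runtime-selected masks instead of hardcoded constants:

#include <stdint.h>

/* Low bits for legacy/PAE paging, which has no high available bits. */
#define DEFAULT_SPTE_HOST_WRITABLE	(1ULL << 9)
#define DEFAULT_SPTE_MMU_WRITABLE	(1ULL << 10)

/* High bits for EPT, which is short on low available bits. */
#define EPT_SPTE_HOST_WRITABLE		(1ULL << 57)
#define EPT_SPTE_MMU_WRITABLE		(1ULL << 58)

/* Consulted by the SPTE make/query helpers instead of fixed bits. */
static uint64_t shadow_host_writable_mask = DEFAULT_SPTE_HOST_WRITABLE;
static uint64_t shadow_mmu_writable_mask  = DEFAULT_SPTE_MMU_WRITABLE;

/*
 * Hypothetical hook; VMX would invoke something like this when EPT is
 * in use, steering both masks to high software-available bits.
 */
static void mmu_set_ept_writable_masks(void)
{
	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
}
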
Of course, I forgot the above PAE restriction multiple times, and
journeyed down several dead ends. The most notable failed idea was
using the AD_* masks in bits 52 and 53 to denote shadow-present SPTEs.
That would have been quite clever as it would provide the same benefits
without burning another available bit.

Along the way, through the many failed attempts, I collected a variety
of bug fixes and cleanups, mostly things found by inspection after doing
a deep dive to figure out what I broke.

Sean Christopherson (24):
  KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only if PML is
    enabled
  KVM: x86/mmu: Check for shadow-present SPTE before querying A/D status
  KVM: x86/mmu: Bail from fast_page_fault() if SPTE is not
    shadow-present
  KVM: x86/mmu: Disable MMIO caching if MMIO value collides with L1TF
  KVM: x86/mmu: Retry page faults that hit an invalid memslot
  KVM: x86/mmu: Don't install bogus MMIO SPTEs if MMIO caching is
    disabled
  KVM: x86/mmu: Handle MMIO SPTEs directly in mmu_set_spte()
  KVM: x86/mmu: Drop redundant trace_kvm_mmu_set_spte() in the TDP MMU
  KVM: x86/mmu: Rename 'mask' to 'spte' in MMIO SPTE helpers
  KVM: x86/mmu: Stop using software available bits to denote MMIO SPTEs
  KVM: x86/mmu: Add module param to disable MMIO caching (for testing)
  KVM: x86/mmu: Rename and document A/D scheme for TDP SPTEs
  KVM: x86/mmu: Use MMIO SPTE bits 53 and 52 for the MMIO generation
  KVM: x86/mmu: Document dependency between TDP A/D type and saved bits
  KVM: x86/mmu: Move initial kvm_mmu_set_mask_ptes() call into MMU
    proper
  KVM: x86/mmu: Co-locate code for setting various SPTE masks
  KVM: x86/mmu: Move logic for setting SPTE masks for EPT into the MMU
    proper
  KVM: x86/mmu: Make Host-writable and MMU-writable bit locations
    dynamic
  KVM: x86/mmu: Use high bits for host/mmu writable masks for EPT SPTEs
  KVM: x86/mmu: Use a dedicated bit to track shadow/MMU-present SPTEs
  KVM: x86/mmu: Tweak auditing WARN for A/D bits to !PRESENT (was MMIO)
  KVM: x86/mmu: Use is_removed_spte() instead of open coded equivalents
  KVM: x86/mmu: Use low available bits for removed SPTEs
  KVM: x86/mmu: Dump reserved bits if they're detected on non-MMIO SPTE

 Documentation/virt/kvm/locking.rst |  49 +++++----
 arch/x86/include/asm/kvm_host.h    |   3 -
 arch/x86/kvm/mmu.h                 |  15 +--
 arch/x86/kvm/mmu/mmu.c             |  87 +++++++---------
 arch/x86/kvm/mmu/mmu_internal.h    |  16 +--
 arch/x86/kvm/mmu/paging_tmpl.h     |   2 +-
 arch/x86/kvm/mmu/spte.c            | 157 ++++++++++++++++++++---------
 arch/x86/kvm/mmu/spte.h            | 135 +++++++++++++++++--------
 arch/x86/kvm/mmu/tdp_mmu.c         |  22 ++--
 arch/x86/kvm/svm/svm.c             |   2 +-
 arch/x86/kvm/vmx/vmx.c             |  24 +----
 arch/x86/kvm/x86.c                 |   3 -
 12 files changed, 290 insertions(+), 225 deletions(-)

--
2.30.1.766.gb4fecdf3b7-goog