Message-ID: <CABgObfYkLond4fvObybhn4pGcbATc5chRJtmxU2yE6rLG4PkeQ@mail.gmail.com>
Date: Sat, 14 Sep 2024 15:50:42 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] KVM: x86: MMU changes for 6.12

On Sat, Sep 14, 2024 at 3:14 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> The bulk of the changes are to clean up the thorny "unprotect and retry" mess
> that grew over time.  The other notable change is to support yielding in the
> shadow MMU when zapping rmaps (simply a historic oversight, AFAICT).

This conflicts with the "zap modified memslot only" series that is in kvm/next.

The resolution is nice since it's possible to reuse the new kvm_unmap_gfn_range():

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8cd758913282..1f59781351f9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7064,17 +7064,10 @@ static void kvm_mmu_zap_memslot_leafs(
         .end = slot->base_gfn + slot->npages,
         .may_block = true,
     };
-    bool flush = false;

     write_lock(&kvm->mmu_lock);

-    if (kvm_memslots_have_rmaps(kvm))
-        flush = kvm_handle_gfn_range(kvm, &range, kvm_zap_rmap);
-
-    if (tdp_mmu_enabled)
-        flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);
-
-    if (flush)
+    if (kvm_unmap_gfn_range(kvm, &range))
         kvm_flush_remote_tlbs_memslot(kvm, slot);

     write_unlock(&kvm->mmu_lock);

(Pardon the whitespace damage!)
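
For reference, the reused helper just folds together the two branches the hunk
deletes: zap the rmap-backed (shadow MMU) SPTEs if the VM has any, unmap the
range in the TDP MMU, and report whether a TLB flush is needed. A rough sketch
reconstructed from the removed lines (the "_sketch" suffix marks it as an
approximation, not the exact in-tree code):

/* Reconstructed from the branches removed above; details may differ. */
static bool kvm_unmap_gfn_range_sketch(struct kvm *kvm,
                                       struct kvm_gfn_range *range)
{
        bool flush = false;

        /* Shadow MMU: zap the rmap-backed SPTEs covering the range. */
        if (kvm_memslots_have_rmaps(kvm))
                flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap);

        /* TDP MMU: unmap the same GFN range, accumulating the flush hint. */
        if (tdp_mmu_enabled)
                flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);

        return flush;
}

Either way, the memslot zap path collapses to a single call plus the
conditional remote TLB flush.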

Paolo

> The following changes since commit 47ac09b91befbb6a235ab620c32af719f8208399:
>
>   Linux 6.11-rc4 (2024-08-18 13:17:27 -0700)
>
> are available in the Git repository at:
>
>   https://github.com/kvm-x86/linux.git tags/kvm-x86-mmu-6.12
>
> for you to fetch changes up to 9a5bff7f5ec2383e3edac5eda561b52e267ccbb5:
>
>   KVM: x86/mmu: Use KVM_PAGES_PER_HPAGE() instead of an open coded equivalent (2024-09-09 20:22:08 -0700)
>
> ----------------------------------------------------------------
> KVM x86 MMU changes for 6.12:
>
>  - Overhaul the "unprotect and retry" logic to more precisely identify cases
>    where retrying is actually helpful, and to harden all retry paths against
>    putting the guest into an infinite retry loop.
>
>  - Add support for yielding, e.g. to honor NEED_RESCHED, when zapping rmaps in
>    the shadow MMU.
>
>  - Refactor pieces of the shadow MMU related to aging SPTEs in preparation for
>    adding MGLRU support in KVM.
>
>  - Misc cleanups
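
On the NEED_RESCHED bullet above: the yield point in the rmap walk is the
usual "reschedule under mmu_lock" pattern. A minimal sketch with illustrative
local names (can_yield comes from the shortlog below; flush_on_yield and the
rest are placeholders, and the real walk flushes only the range covered so
far, whereas this sketch conservatively flushes the whole memslot):

        /* Inside the rmap walk, with kvm->mmu_lock held for write. */
        if (can_yield &&
            (need_resched() || rwlock_needbreak(&kvm->mmu_lock))) {
                /* Flush pending zaps before dropping the lock. */
                if (flush && flush_on_yield) {
                        kvm_flush_remote_tlbs_memslot(kvm, slot);
                        flush = false;
                }
                cond_resched_rwlock_write(&kvm->mmu_lock);
        }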
>
> ----------------------------------------------------------------
> Sean Christopherson (33):
>       KVM: x86/mmu: Clean up function comments for dirty logging APIs
>       KVM: x86/mmu: Decrease indentation in logic to sync new indirect shadow page
>       KVM: x86/mmu: Drop pointless "return" wrapper label in FNAME(fetch)
>       KVM: x86/mmu: Reword a misleading comment about checking gpte_changed()
>       KVM: x86/mmu: Replace PFERR_NESTED_GUEST_PAGE with a more descriptive helper
>       KVM: x86/mmu: Trigger unprotect logic only on write-protection page faults
>       KVM: x86/mmu: Skip emulation on page fault iff 1+ SPs were unprotected
>       KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped
>       KVM: x86: Get RIP from vCPU state when storing it to last_retry_eip
>       KVM: x86: Store gpa as gpa_t, not unsigned long, when unprotecting for retry
>       KVM: x86/mmu: Apply retry protection to "fast nTDP unprotect" path
>       KVM: x86/mmu: Try "unprotect for retry" iff there are indirect SPs
>       KVM: x86: Move EMULTYPE_ALLOW_RETRY_PF to x86_emulate_instruction()
>       KVM: x86: Fold retry_instruction() into x86_emulate_instruction()
>       KVM: x86/mmu: Don't try to unprotect an INVALID_GPA
>       KVM: x86/mmu: Always walk guest PTEs with WRITE access when unprotecting
>       KVM: x86/mmu: Move event re-injection unprotect+retry into common path
>       KVM: x86: Remove manual pfn lookup when retrying #PF after failed emulation
>       KVM: x86: Check EMULTYPE_WRITE_PF_TO_SP before unprotecting gfn
>       KVM: x86: Apply retry protection to "unprotect on failure" path
>       KVM: x86: Update retry protection fields when forcing retry on emulation failure
>       KVM: x86: Rename reexecute_instruction()=>kvm_unprotect_and_retry_on_failure()
>       KVM: x86/mmu: Subsume kvm_mmu_unprotect_page() into the and_retry() version
>       KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list
>       KVM: x86/mmu: WARN on MMIO cache hit when emulating write-protected gfn
>       KVM: x86/mmu: Move walk_slot_rmaps() up near for_each_slot_rmap_range()
>       KVM: x86/mmu: Plumb a @can_yield parameter into __walk_slot_rmaps()
>       KVM: x86/mmu: Add a helper to walk and zap rmaps for a memslot
>       KVM: x86/mmu: Honor NEED_RESCHED when zapping rmaps and blocking is allowed
>       KVM: x86/mmu: Morph kvm_handle_gfn_range() into an aging specific helper
>       KVM: x86/mmu: Fold mmu_spte_age() into kvm_rmap_age_gfn_range()
>       KVM: x86/mmu: Add KVM_RMAP_MANY to replace open coded '1' and '1ul' literals
>       KVM: x86/mmu: Use KVM_PAGES_PER_HPAGE() instead of an open coded equivalent
>
>  arch/x86/include/asm/kvm_host.h |  14 +-
>  arch/x86/kvm/mmu/mmu.c          | 522 ++++++++++++++++++++++------------------
>  arch/x86/kvm/mmu/mmu_internal.h |   3 +
>  arch/x86/kvm/mmu/mmutrace.h     |   1 +
>  arch/x86/kvm/mmu/paging_tmpl.h  |  63 ++---
>  arch/x86/kvm/mmu/tdp_mmu.c      |   6 +-
>  arch/x86/kvm/x86.c              | 133 +++-------
>  7 files changed, 368 insertions(+), 374 deletions(-)
>

