Message-ID: <20230825020733.2849862-1-seanjc@google.com>
Date: Thu, 24 Aug 2023 19:07:31 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Yan Zhao <yan.y.zhao@...el.com>
Subject: [PATCH 0/2] KVM: Pre-check mmu_notifier retry on x86

Pre-check for an mmu_notifier retry on x86 to avoid contending mmu_lock,
which is quite problematic on preemptible kernels due to the way x86's TDP
MMU reacts to mmu_lock contention: if contention is detected while zapping
SPTEs for an mmu_notifier invalidation, the TDP MMU drops mmu_lock and
yields.

The idea behind yielding is to let vCPUs that are trying to fault-in memory
make forward progress while the invalidation is ongoing. This works
because x86 uses the precise(ish) version of retry that checks for hva
overlap. At least, it works so long as vCPUs aren't hitting the region
that's being zapped.

Yielding turns out to be really bad when the vCPU is trying to fault-in a
page that *is* covered by the invalidation, because the vCPU ends up
retrying over and over, which puts mmu_lock under constant contention and
ultimately causes the invalidation to take much longer due to the zapping
task constantly yielding. And in the worst-case scenario, if the
invalidation is finding SPTEs to zap, every yield triggers a remote
(*cough* VM-wide) TLB flush.

Sean Christopherson (2):
  KVM: Allow calling mmu_invalidate_retry_hva() without holding mmu_lock
  KVM: x86/mmu: Retry fault before acquiring mmu_lock if mapping is
    changing

 arch/x86/kvm/mmu/mmu.c   |  3 +++
 include/linux/kvm_host.h | 17 ++++++++++++++---
 2 files changed, 17 insertions(+), 3 deletions(-)

base-commit: fff2e47e6c3b8050ca26656693caa857e3a8b740
--
2.42.0.rc2.253.gd59a3bf2b4-goog