Message-ID: <aKNu6gYNO1j_Wpdj@google.com>
Date: Mon, 18 Aug 2025 11:20:26 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org, Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>, Thomas Gleixner <tglx@...utronix.de>,
Paolo Bonzini <pbonzini@...hat.com>, x86@...nel.org, Borislav Petkov <bp@...en8.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] KVM: x86: Fix the interaction between SMM and the
asynchronous pagefault
On Wed, Aug 13, 2025, Maxim Levitsky wrote:
> Currently a #SMI can cause KVM to drop an #APF ready event and
> subsequently cause the guest to never resume the task that is waiting
> for it.
> This can result in tasks becoming permanently stuck within the guest.
>
> This happens because KVM flushes the APF queue without notifying the guest
> of completed APF requests when the guest exits to real mode.
>
> The SMM exit code calls kvm_set_cr0 with CR0.PE == 0, which triggers
> this flush.
>
> It must be noted that while this flush is reasonable to do for an actual
> real mode entry, it actually achieves nothing here because it is too late
> to flush this queue on SMM exit.
>
> To fix this, avoid doing this flush altogether, and handle real mode
> entry/exit the same way KVM already handles APIC enable/disable events:
>
> APF completion events are not injected while the APIC is disabled, and
> once the APIC is re-enabled, KVM raises the KVM_REQ_APF_READY request,
> which causes the first pending #APF ready event to be injected prior to
> entering guest mode.
>
> This change also has the side benefit of preserving #APF events if the
> guest temporarily enters real mode - for example, to call firmware -
> although such usage should be extremely rare in modern operating systems.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
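As an aside, for readers less familiar with the async #PF plumbing: the
"defer and re-check" scheme described above boils down to the pattern in the
standalone sketch below.  This is purely illustrative C, not KVM code, and
every name in it is made up.

/*
 * Toy model of the "defer while the guest can't take it, re-check on
 * re-enable" pattern: completed events stay queued, delivery is gated on an
 * enable bit, and flipping the bit back on raises a request so the next
 * "guest entry" injects the first pending event instead of losing it.
 */
#include <stdbool.h>
#include <stdio.h>

#define REQ_APF_READY	(1u << 0)

struct toy_vcpu {
	unsigned int requests;		/* requests checked before guest entry */
	bool delivery_enabled;		/* stand-in for "paging/APIC enabled" */
	unsigned int pending_events;	/* completed but not yet injected */
};

static void make_request(struct toy_vcpu *v, unsigned int req)
{
	v->requests |= req;
}

/* Host finished a page-in; queue a "page ready" event and ask for injection. */
static void apf_complete(struct toy_vcpu *v)
{
	v->pending_events++;
	make_request(v, REQ_APF_READY);
}

/* Guest toggles the state that gates delivery (paging here, or the APIC). */
static void set_delivery(struct toy_vcpu *v, bool enable)
{
	bool was_enabled = v->delivery_enabled;

	v->delivery_enabled = enable;

	/* Never drop queued events on disable; re-check them on re-enable. */
	if (!was_enabled && enable && v->pending_events)
		make_request(v, REQ_APF_READY);
}

static void enter_guest(struct toy_vcpu *v)
{
	if (!(v->requests & REQ_APF_READY))
		return;

	v->requests &= ~REQ_APF_READY;

	/* Can't deliver right now: keep the event queued instead of dropping it. */
	if (!v->delivery_enabled)
		return;

	if (v->pending_events) {
		v->pending_events--;
		printf("injected one 'page ready' event\n");
	}
}

int main(void)
{
	struct toy_vcpu v = { .delivery_enabled = true };

	apf_complete(&v);		/* event becomes ready */
	set_delivery(&v, false);	/* guest leaves paged mode */
	enter_guest(&v);		/* nothing injected, event preserved */
	set_delivery(&v, true);		/* guest re-enables paging */
	enter_guest(&v);		/* the preserved event is injected now */
	return 0;
}

The property that matters for the bug above is that disabling delivery only
gates injection; it never discards pending_events, and re-enabling raises the
request so they are picked back up on the next entry.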
> ---
>  arch/x86/kvm/x86.c | 11 +++++++----
>  arch/x86/kvm/x86.h |  1 +
>  2 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3d45a4cd08a4..5dfe166025bf 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1118,15 +1118,18 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon
>  	}
>
>  	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
> -		kvm_clear_async_pf_completion_queue(vcpu);
> -		kvm_async_pf_hash_reset(vcpu);
> -
>  		/*
>  		 * Clearing CR0.PG is defined to flush the TLB from the guest's
>  		 * perspective.
>  		 */
>  		if (!(cr0 & X86_CR0_PG))
>  			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
> +
> +		/*
> +		 * Re-check APF completion events, when the guest re-enables paging.
> +		 */
> +		if ((cr0 & X86_CR0_PG) && kvm_pv_async_pf_enabled(vcpu))
I'm tempted to make this an elif, i.e.
		if (!(cr0 & X86_CR0_PG))
			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
		else if (kvm_pv_async_pf_enabled(vcpu))
			kvm_make_request(KVM_REQ_APF_READY, vcpu);
In theory, that could set us up to fail if another CR0.PG=1 case is added, but I
like to think future us will be smart enough to turn it into:
		if (!(cr0 & X86_CR0_PG)) {
			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
		} else {
			if (kvm_pv_async_pf_enabled(vcpu))
				kvm_make_request(KVM_REQ_APF_READY, vcpu);

			if (<other thing>)
				...
		}
> +			kvm_make_request(KVM_REQ_APF_READY, vcpu);
>  	}
>
>  	if ((cr0 ^ old_cr0) & KVM_MMU_CR0_ROLE_BITS)
> @@ -3547,7 +3550,7 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	return 0;
>  }
>
> -static inline bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
> +bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
This is used in the same file, so there's no reason/need to expose it via x86.h.  The
overall diff is small enough that I'm comfortable hoisting this "up" as part of
the fix, especially since this needs to go to stable@.
If we use an elif, this?
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6bdf7ef0b535..2bc41e562314 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1030,6 +1030,13 @@ bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr)
 }
 EXPORT_SYMBOL_GPL(kvm_require_dr);
 
+static inline bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
+{
+	u64 mask = KVM_ASYNC_PF_ENABLED | KVM_ASYNC_PF_DELIVERY_AS_INT;
+
+	return (vcpu->arch.apf.msr_en_val & mask) == mask;
+}
+
 static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.reserved_gpa_bits | rsvd_bits(5, 8) | rsvd_bits(1, 2);
@@ -1122,15 +1129,15 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon
 	}
 
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
-		kvm_clear_async_pf_completion_queue(vcpu);
-		kvm_async_pf_hash_reset(vcpu);
-
 		/*
 		 * Clearing CR0.PG is defined to flush the TLB from the guest's
-		 * perspective.
+		 * perspective. If the guest is (re)enabling, check for async
+		 * #PFs that were completed while paging was disabled.
 		 */
 		if (!(cr0 & X86_CR0_PG))
 			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
+		else if (kvm_pv_async_pf_enabled(vcpu))
+			kvm_make_request(KVM_REQ_APF_READY, vcpu);
 	}
 
 	if ((cr0 ^ old_cr0) & KVM_MMU_CR0_ROLE_BITS)
@@ -3524,13 +3531,6 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 0;
 }
 
-static inline bool kvm_pv_async_pf_enabled(struct kvm_vcpu *vcpu)
-{
-	u64 mask = KVM_ASYNC_PF_ENABLED | KVM_ASYNC_PF_DELIVERY_AS_INT;
-
-	return (vcpu->arch.apf.msr_en_val & mask) == mask;
-}
-
 static int kvm_pv_enable_async_pf(struct kvm_vcpu *vcpu, u64 data)
 {
 	gpa_t gpa = data & ~0x3f;