Message-ID: <aCsy-m_esVjy8Pey@google.com>
Date: Mon, 19 May 2025 06:33:14 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: pbonzini@...hat.com, reinette.chatre@...el.com, rick.p.edgecombe@...el.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 1/2] KVM: x86/mmu: Add RET_PF_RETRY_INVALID_SLOT for fault
retry on invalid slot
On Mon, May 19, 2025, Yan Zhao wrote:
> Introduce a new return value, RET_PF_RETRY_INVALID_SLOT, to inform callers
> of kvm_mmu_do_page_fault() that a fault retry is due to an invalid memslot.
> This prevents deadlocks when a memslot is removed while KVM is pre-faulting
> GPAs in that memslot, or while TDX is locally retrying faults on private
> pages.
>
> Take pre-faulting as an example.
>
> During ioctl KVM_PRE_FAULT_MEMORY, kvm->srcu is acquired around the
> pre-faulting of the entire range. For x86, kvm_arch_vcpu_pre_fault_memory()
> further invokes kvm_tdp_map_page(), which retries kvm_mmu_do_page_fault()
> if the return value is RET_PF_RETRY.
>
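For reference, the retry loop in question looks roughly like this (a
trimmed sketch of kvm_tdp_map_page(); the bail-out checks at the top of
the loop are shown further below):

	do {
		/* signal / VM-dead bail-outs elided, see below */
		cond_resched();

		r = kvm_mmu_do_page_fault(vcpu, gpa, error_code, true, NULL, level);
	} while (r == RET_PF_RETRY);

The caller holds SRCU across every iteration.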
> If a memslot is deleted during the ioctl KVM_PRE_FAULT_MEMORY, after
> kvm_invalidate_memslot() marks the slot as invalid and makes it visible via
> rcu_assign_pointer() in kvm_swap_active_memslots(), kvm_mmu_do_page_fault()
> may encounter the invalid slot and return RET_PF_RETRY.
> kvm_tdp_map_page() then retries without releasing the SRCU lock.
> Meanwhile, synchronize_srcu_expedited() in kvm_swap_active_memslots()
> blocks, waiting for kvm_vcpu_pre_fault_memory() to release the SRCU lock,
> leading to a deadlock.
Probably worth calling out that KVM will respond to signals, i.e. there's no risk
to the host kernel.
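E.g. the existing bail-outs at the top of that retry loop (again, a sketch
of the current code):

	if (signal_pending(current))
		return -EINTR;

	if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
		return -EIO;

mean userspace can always regain control by sending a signal; the prefault
spins, but the host kernel itself is never wedged.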
> "slot deleting" thread "prefault" thread
> ----------------------------- ----------------------
> srcu_read_lock();
> (A)
> invalid_slot->flags |= KVM_MEMSLOT_INVALID;
> rcu_assign_pointer();
>
> kvm_tdp_map_page();
> (B)
> do {
> r = kvm_mmu_do_page_fault();
>
> (C) synchronize_srcu_expedited();
>
> } while (r == RET_PF_RETRY);
>
> (D) srcu_read_unlock();
>
> As shown in the diagram, (C) is waiting for (D). However, (B) continuously
> finds the invalid slot before (C) completes, causing (B) to retry forever
> and preventing (D) from ever being invoked.
>
> The local retry code in TDX's EPT violation handler faces a similar issue,
> where a deadlock can occur when faulting a private GFN in a slot that is
> concurrently being removed.
>
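For completeness, the TDX side has the same shape (a rough sketch of the
local retry loop in tdx_handle_ept_violation(); names and ordering per my
reading of the current code, details elided):

	while (1) {
		ret = __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
		if (ret != RET_PF_RETRY || !local_retry)
			break;

		if (kvm_vcpu_has_events(vcpu) || signal_pending(current))
			break;

		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
			ret = -EIO;
			break;
		}

		cond_resched();
	}

and again SRCU is held across every iteration.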
> To resolve the deadlock, introduce a new return value,
> RET_PF_RETRY_INVALID_SLOT, and modify kvm_mmu_do_page_fault() to return
> RET_PF_RETRY_INVALID_SLOT instead of RET_PF_RETRY when it encounters an
> invalid memslot. This prevents endless retries in kvm_tdp_map_page() and
> tdx_handle_ept_violation(), allowing the SRCU lock to be released and slot
> removal to proceed.
>
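IIUC, the proposed change boils down to tweaking the existing invalid-slot
check in the fault path, i.e. something like this (sketch; exact location
per the patch):

	/* today this returns RET_PF_RETRY */
	if (slot->flags & KVM_MEMSLOT_INVALID)
		return RET_PF_RETRY_INVALID_SLOT;

so that callers can tell "retry because the slot is going away" apart from
every other retry.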
> As all callers of kvm_tdp_map_page(), i.e.
> kvm_arch_vcpu_pre_fault_memory() and tdx_gmem_post_populate(), are in the
> pre-fault path, treat RET_PF_RETRY_INVALID_SLOT the same as RET_PF_EMULATE
> and return -ENOENT from kvm_tdp_map_page(), so that userspace is made
> aware of the slot removal.
Userspace should already be "aware" of the slot removal.
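For context, the plumbing being described is the existing result-to-errno
switch in kvm_tdp_map_page(), roughly:

	switch (r) {
	case RET_PF_FIXED:
	case RET_PF_SPURIOUS:
		return 0;
	case RET_PF_EMULATE:
	case RET_PF_RETRY_INVALID_SLOT:	/* proposed */
		return -ENOENT;
	default:
		return -EIO;
	}

i.e. the prefault ioctl would fail with -ENOENT instead of hanging.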
> Returning RET_PF_RETRY_INVALID_SLOT from kvm_mmu_do_page_fault() does not
> affect kvm_mmu_page_fault() or kvm_arch_async_page_ready(), as their
> callers either check only whether the return value is > 0 (to re-enter the
> vCPU for a retry) or do not check the return value at all.
>
> Reported-by: Reinette Chatre <reinette.chatre@...el.com>
Was this hit by a real VMM? If so, why is a TDX VMM removing a memslot without
kicking vCPUs out of KVM?
Regardless, I would prefer not to add a new RET_PF_* flag for this. At a glance,
KVM can simply drop and reacquire SRCU in the relevant paths.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cbc84c6abc2e..ceab756052eb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4866,7 +4866,12 @@ int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code, u8 *level
 			return -EIO;
 
 		cond_resched();
+
 		r = kvm_mmu_do_page_fault(vcpu, gpa, error_code, true, NULL, level);
+		if (r == RET_PF_RETRY) {
+			kvm_vcpu_srcu_read_unlock(vcpu);
+			kvm_vcpu_srcu_read_lock(vcpu);
+		}
 	} while (r == RET_PF_RETRY);
 
 	if (r < 0)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b952bc673271..e29966ce3ab7 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1920,6 +1920,9 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 			break;
 		}
 
+		kvm_vcpu_srcu_read_unlock(vcpu);
+		kvm_vcpu_srcu_read_lock(vcpu);
+
 		cond_resched();
 	}
 	return ret;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b24db92e98f3..21a3fa7476dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4266,7 +4266,6 @@ static int kvm_vcpu_ioctl_get_stats_fd(struct kvm_vcpu *vcpu)
 static int kvm_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 				     struct kvm_pre_fault_memory *range)
 {
-	int idx;
 	long r;
 	u64 full_size;
 
@@ -4279,7 +4278,7 @@ static int kvm_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	vcpu_load(vcpu);
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
+	kvm_vcpu_srcu_read_lock(vcpu);
 
 	full_size = range->size;
 	do {
@@ -4300,7 +4299,7 @@ static int kvm_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 		cond_resched();
 	} while (range->size);
 
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+	kvm_vcpu_srcu_read_unlock(vcpu);
 	vcpu_put(vcpu);
 
 	/* Return success if at least one page was mapped successfully. */