Message-ID: <52bdeeec0dfbb74f90d656dbd93dc9c7bb30e84f.camel@intel.com>
Date: Mon, 19 May 2025 16:12:51 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "seanjc@...gle.com" <seanjc@...gle.com>, "Zhao, Yan Y"
<yan.y.zhao@...el.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "Chatre, Reinette" <reinette.chatre@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] KVM: x86/mmu: Add RET_PF_RETRY_INVALID_SLOT for fault
retry on invalid slot
On Mon, 2025-05-19 at 06:33 -0700, Sean Christopherson wrote:
> Was this hit by a real VMM? If so, why is a TDX VMM removing a memslot without
> kicking vCPUs out of KVM?
>
> Regardless, I would prefer not to add a new RET_PF_* flag for this. At a glance,
> KVM can simply drop and reacquire SRCU in the relevant paths.
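To make sure we're reading it the same way, I take that to mean roughly the
below in the kvm_tdp_map_page() retry loop (just a sketch from memory, not the
exact diff we tried, and the loop body is abbreviated):

int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code, u8 *level)
{
        int r;

        ...

        do {
                if (signal_pending(current))
                        return -EINTR;

                cond_resched();

                r = kvm_mmu_do_page_fault(vcpu, gpa, error_code, true, NULL, level);

                /*
                 * Sketch: on RET_PF_RETRY, drop and reacquire kvm->srcu so a
                 * memslot deletion waiting in synchronize_srcu() can finish,
                 * then retry against the updated memslot layout.
                 */
                if (r == RET_PF_RETRY) {
                        kvm_vcpu_srcu_read_unlock(vcpu);
                        kvm_vcpu_srcu_read_lock(vcpu);
                }
        } while (r == RET_PF_RETRY);

        ...
}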
During the initial debugging and kicking around stage, this was the first
direction we looked at. But kvm_gmem_populate() doesn't hold SRCU, so
kvm_tdp_map_page() ends up trying to unlock it without it being held (although
that version didn't check r == RET_PF_RETRY like yours does). Yan had the
following concerns and came up with the version in this series, which we saved
for review on the list:
> However, upon further consideration, I am reluctant to implement this fix for
> the following reasons:
> - kvm_gmem_populate() already holds the kvm->slots_lock.
> - While retrying with srcu unlock and lock can workaround the
> KVM_MEMSLOT_INVALID deadlock, it results in each kvm_vcpu_pre_fault_memory()
> and tdx_handle_ept_violation() faulting with different memslot layouts.
I'm not sure why the second one is really a problem. For the first one, I think
that path could just take the SRCU read lock in the proper order with respect
to kvm->slots_lock? I need to stare at these locking rules each time, so this
is a low quality suggestion. But that is the context.
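For the "proper order" idea, I'm picturing something like the below on the
caller side (again just a sketch, not tested; the callback and argument names
are placeholders, and the ordering vs. kvm->slots_lock would need to be double
checked against the documented locking rules):

        /*
         * Sketch: take slots_lock first, then the SRCU read side, so that
         * kvm_tdp_map_page() (reached via the post-populate callback) is
         * always entered with SRCU held and its drop/reacquire on
         * RET_PF_RETRY stays balanced.
         */
        mutex_lock(&kvm->slots_lock);
        kvm_vcpu_srcu_read_lock(vcpu);

        ret = kvm_gmem_populate(kvm, gfn, src, npages, post_populate_cb, opaque);

        kvm_vcpu_srcu_read_unlock(vcpu);
        mutex_unlock(&kvm->slots_lock);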