Message-ID: <0feae675-3ccb-4d0e-b2cd-4477f9288058@redhat.com>
Date: Tue, 10 Sep 2024 12:13:49 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
"Zhao, Yan Y" <yan.y.zhao@...el.com>
Cc: "seanjc@...gle.com" <seanjc@...gle.com>, "Huang, Kai"
<kai.huang@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"isaku.yamahata@...il.com" <isaku.yamahata@...il.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"nik.borisov@...e.com" <nik.borisov@...e.com>
Subject: Re: [PATCH 19/21] KVM: TDX: Add an ioctl to create initial guest
memory
On 9/6/24 18:30, Edgecombe, Rick P wrote:
> /*
> * The case to care about here is a PTE getting zapped concurrently and
> * this function erroneously thinking a page is mapped in the mirror EPT.
> * The private mem zapping paths are already covered by other locks held
> * here, but grab an mmu read_lock to not trigger the assert in
> * kvm_tdp_mmu_gpa_is_mapped().
> */
>
> Yan, do you think it is sufficient?
If you're actually relying on the other locks being sufficient, then
there can be no ENOENT; a miss at that point would be a KVM bug, not
something to report to userspace.
Maybe:
/*
 * The private mem cannot be zapped after kvm_tdp_map_page()
 * because all paths are covered by slots_lock and the
 * filemap invalidate lock.  Check that they are indeed enough.
 */
if (IS_ENABLED(CONFIG_KVM_PROVE_MMU)) {
        scoped_guard(read_lock, &kvm->mmu_lock) {
                if (KVM_BUG_ON(!kvm_tdp_mmu_gpa_is_mapped(vcpu, gpa), kvm)) {
                        ret = -EIO;
                        goto out;
                }
        }
}
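
For context, a rough sketch (not the actual patch) of where this would sit
in the post-populate callback for KVM_TDX_INIT_MEM_REGION.  The callback
shape, the error-code bit and the tdx_mem_page_add() wrapper at the end are
my guesses at what the series does; only the PROVE_MMU block is the
suggestion above:

static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
                                  void __user *src, int order, void *arg)
{
        struct kvm_vcpu *vcpu = arg;    /* vCPU issuing the ioctl (assumed) */
        gpa_t gpa = gfn_to_gpa(gfn);
        u8 level = PG_LEVEL_4K;
        int ret;

        /*
         * Fault the GPA into the mirror EPT.  The caller holds slots_lock
         * and the guest_memfd filemap invalidate lock, so the new mapping
         * cannot be zapped until we are done.
         */
        ret = kvm_tdp_map_page(vcpu, gpa, PFERR_PRIVATE_ACCESS, &level);
        if (ret)
                return ret;

        if (IS_ENABLED(CONFIG_KVM_PROVE_MMU)) {
                scoped_guard(read_lock, &kvm->mmu_lock) {
                        /* A miss here is a KVM bug, not -ENOENT. */
                        if (KVM_BUG_ON(!kvm_tdp_mmu_gpa_is_mapped(vcpu, gpa),
                                       kvm))
                                return -EIO;
                }
        }

        /* Hand the source page to the TDX module (hypothetical wrapper). */
        return tdx_mem_page_add(kvm, gfn, pfn, src);
}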
Paolo