Message-ID: <b9b4b80a3818e9ebb3cb1aec76d1a1083fb91c7c.camel@intel.com>
Date: Tue, 10 Feb 2026 22:46:02 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "Hansen, Dave" <dave.hansen@...el.com>, "seanjc@...gle.com"
<seanjc@...gle.com>, "bp@...en8.de" <bp@...en8.de>, "kas@...nel.org"
<kas@...nel.org>, "dave.hansen@...ux.intel.com"
<dave.hansen@...ux.intel.com>, "mingo@...hat.com" <mingo@...hat.com>,
"x86@...nel.org" <x86@...nel.org>, "tglx@...nel.org" <tglx@...nel.org>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>, "pbonzini@...hat.com"
<pbonzini@...hat.com>
CC: "ackerleytng@...gle.com" <ackerleytng@...gle.com>, "sagis@...gle.com"
<sagis@...gle.com>, "Li, Xiaoyao" <xiaoyao.li@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Zhao, Yan Y"
<yan.y.zhao@...el.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>, "Yamahata, Isaku"
<isaku.yamahata@...el.com>, "binbin.wu@...ux.intel.com"
<binbin.wu@...ux.intel.com>, "Annapurve, Vishal" <vannapurve@...gle.com>
Subject: Re: [RFC PATCH v5 16/45] x86/virt/tdx: Add
tdx_alloc/free_control_page() helpers
On Tue, 2026-02-10 at 14:19 -0800, Dave Hansen wrote:
> On 2/10/26 14:15, Edgecombe, Rick P wrote:
> > I wasn't familiar with atomic_dec_and_lock(). I'm guessing the atomic
> > part doesn't cover both decrementing *and* taking the lock?
>
> Right. Only 1=>0 is under the lock. All other decs are outside the lock.
>
> It doesn't do the atomic and the lock "atomically together" somehow.
Sorry, I am a bit confused, but I think the "1=>0" transition and taking the
lock *are* atomic together?
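
My (possibly wrong) reading of lib/dec_and_lock.c, quoting from memory so
please correct me if I misremember, is that the 1=>0 decrement itself is done
while holding the lock, roughly:

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		/* Subtract 1 from the counter unless that drops it to 0 (i.e. it was 1) */
		if (atomic_add_unless(atomic, -1, 1))
			return 0;

		/* Otherwise do it the slow way, under the lock */
		spin_lock(lock);
		if (atomic_dec_and_test(atomic))
			return 1;
		spin_unlock(lock);
		return 0;
	}

i.e. when it returns true the caller holds the lock, and the counter only ever
reached 0 while that lock was held.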
If so, I think we can avoid the "race" mentioned by Rick, which is currently
handled by "x86/virt/tdx: Optimize tdx_alloc/free_control_page() helpers".

Kirill described the race in [*]. Quoting it here:
---
Consider the following scenario:

  CPU0                              CPU1

  tdx_pamt_put()
  atomic_dec_and_test() == true
                                    tdx_pamt_get()
                                    atomic_inc_not_zero() == false
                                    tdx_pamt_add()
                                    <takes pamt_lock>
                                    // CPU0 never removed PAMT memory
                                    tdh_phymem_pamt_add() == HPA_RANGE_NOT_FREE
                                    atomic_set(1);
                                    <drops pamt_lock>
  <takes pamt_lock>
  // Lost the race to CPU1
  atomic_read() > 0
  <drops pamt_lock>
---
But with atomic_dec_and_lock() (assuming the "1=>0 and lock" step is atomic),
I think this race won't happen: in tdx_pamt_put() on CPU0, the lock is always
held when the refcount drops to 0, so the PAMT pages are guaranteed to be
freed before the lock is released. Therefore tdx_pamt_get() on CPU1 should
never see HPA_RANGE_NOT_FREE.
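
To make it concrete, a rough sketch of what I have in mind (just a sketch
based on this thread, not the actual patch; tdx_find_pamt_refcount() and
tdx_pamt_remove() are placeholder names, and pamt_lock is assumed to be the
global lock discussed above):

	static void tdx_pamt_put(struct page *page)
	{
		atomic_t *refcount = tdx_find_pamt_refcount(page_to_pfn(page));

		/*
		 * Returns true with pamt_lock held only for the 1=>0
		 * transition, so the PAMT pages are removed before the
		 * lock is dropped.
		 */
		if (!atomic_dec_and_lock(refcount, &pamt_lock))
			return;

		tdx_pamt_remove(page);	/* placeholder: frees the PAMT pages */
		spin_unlock(&pamt_lock);
	}

A concurrent tdx_pamt_get() that sees the refcount at 0 then has to take
pamt_lock itself and will only call tdh_phymem_pamt_add() after the old pages
are gone.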
[*]
https://lore.kernel.org/kvm/bfaswqmlsyycr3alibn6f422cjtpd6ybssjekvrrz4zdwgwfcz@pxy25ra4sln2/