Message-ID: <abdf8379-1b34-4534-b8c9-d5ef55635bc0@intel.com>
Date: Thu, 12 Sep 2024 10:55:39 +1200
From: "Huang, Kai" <kai.huang@...el.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>, "seanjc@...gle.com"
<seanjc@...gle.com>
CC: "Yao, Yuan" <yuan.yao@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "isaku.yamahata@...il.com"
<isaku.yamahata@...il.com>, "Zhao, Yan Y" <yan.y.zhao@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>, "kvm@...r.kernel.org"
<kvm@...r.kernel.org>, "nik.borisov@...e.com" <nik.borisov@...e.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>
Subject: Re: [PATCH 09/21] KVM: TDX: Retry seamcall when TDX_OPERAND_BUSY with
operand SEPT
On 11/09/2024 2:48 pm, Edgecombe, Rick P wrote:
> On Wed, 2024-09-11 at 13:17 +1200, Huang, Kai wrote:
>>> is the VM-Enter
>>> error uniquely identifiable,
>>
>> When zero-step mitigation is active in the module, TDH.VP.ENTER tries to
>> grab the SEPT lock, so it can fail with a SEPT BUSY error. But if it
>> does grab the lock successfully, it exits to the VMM with an EPT
>> violation on that GPA immediately.
>>
>> In other words, TDH.VP.ENTER returning SEPT BUSY means "zero-step
>> mitigation" must have been active.
>
> I think this isn't true. A sept locking related busy, maybe. But there are other
> things going on that return BUSY.
I thought we were talking about SEPT locking here. For BUSY in general,
yes, it tries to grab other locks too (e.g., the shared locks of
TDR/TDCS/TDVPS, etc.), but I suppose those are impossible to contend in
the current KVM TDX implementation? Perhaps we need to look more closely
to make sure.
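For reference, the retry behaviour the patch subject describes could be sketched roughly like below. This is a minimal, self-contained C model, not the real patch: the status/operand constants are copied from my reading of the TDX module ABI but should be double-checked, the helper names are my own, and the "SEAMCALL" is a stub that fails with SEPT BUSY a couple of times before succeeding.

```c
#include <assert.h>
#include <stdint.h>

/* Modeled on the TDX module ABI; verify against the spec and the
 * kernel's definitions before relying on the exact values. */
#define TDX_SUCCESS          0ULL
#define TDX_OPERAND_BUSY     0x8000020000000000ULL
#define TDX_OPERAND_ID_SEPT  0x92ULL
#define TDX_STATUS_MASK      0xFFFFFFFF00000000ULL
#define TDX_OPERAND_ID_MASK  0xFFFFULL

/* Stub standing in for a real SEAMCALL: fails with BUSY on the SEPT
 * operand a few times (as if another thread, or zero-step mitigation,
 * holds the SEPT lock), then succeeds. */
static int busy_left = 2;
static uint64_t fake_seamcall(void)
{
	if (busy_left-- > 0)
		return TDX_OPERAND_BUSY | TDX_OPERAND_ID_SEPT;
	return TDX_SUCCESS;
}

/* Bounded retry in the spirit of the patch under review: retry only
 * when the failure is BUSY *and* the contended operand is the SEPT. */
static uint64_t seamcall_retry_sept(int max_retries)
{
	uint64_t err;

	do {
		err = fake_seamcall();
		if ((err & TDX_STATUS_MASK) != TDX_OPERAND_BUSY ||
		    (err & TDX_OPERAND_ID_MASK) != TDX_OPERAND_ID_SEPT)
			break;
	} while (max_retries--);

	return err;
}
```

The point of the operand check is that a bounded retry is only safe to apply where we know the holder will release quickly; other BUSY sources would need their own analysis.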
>
>> A normal EPT violation _COULD_ mean the mitigation is already active,
>> but AFAICT we don't have a way to tell that from the EPT violation
>> itself.
>>
>>> and can KVM rely on HOST_PRIORITY to be set if KVM
>>> runs afoul of the zero-step mitigation?
>>
>> I think HOST_PRIORITY is always set if SEPT SEAMCALLs fail with BUSY.
>
> What led you to think this? It seemed more limited to me.
I interpreted this from the spec (chapter 18.1.4, Concurrency
Restrictions with Host Priority). But looking at the module's public
code, it seems HOST_PRIORITY is only set on a failed host acquire when
the lock can also be contended from the guest side (see
acquire_sharex_lock_hp_ex() and acquire_sharex_lock_hp_sh()), which
makes sense anyway.
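To illustrate the distinction I'm describing, here is a toy C model of the two acquire paths. The names echo the public module code mentioned above, but the bodies are my own guess at the behaviour, purely for illustration: only the "hp" variant, used on guest-contendable locks, records HOST_PRIORITY when the host loses the race.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the module's sharex locks; illustrative only. */
struct sharex_lock {
	bool taken;
	bool host_priority;	/* sticky "host wants this lock" bit */
};

/* "hp" variant: used on locks the guest can contend.  A failed host
 * acquire records HOST_PRIORITY so the guest side backs off and a
 * host retry can win. */
static bool acquire_sharex_lock_hp(struct sharex_lock *l)
{
	if (l->taken) {
		l->host_priority = true;
		return false;
	}
	l->taken = true;
	return true;
}

/* Plain variant: host-only locks (e.g. the TDR/TDCS/TDVPS shared
 * locks above) never set HOST_PRIORITY on contention. */
static bool acquire_sharex_lock(struct sharex_lock *l)
{
	if (l->taken)
		return false;
	l->taken = true;
	return true;
}
```

If that model is right, KVM can only count on seeing HOST_PRIORITY for the guest-contendable locks, which matches what I read out of the module code.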