Message-ID: <a0676c7b-9e6d-4af4-87d5-f822ab247730@intel.com>
Date: Wed, 30 Apr 2025 10:15:05 +0800
From: Zhiquan Li <zhiquan1.li@...el.com>
To: Dave Hansen <dave.hansen@...el.com>, Jun Miao <jun.miao@...el.com>,
<kirill.shutemov@...ux.intel.com>, <dave.hansen@...ux.intel.com>
CC: <x86@...nel.org>, <linux-coco@...ts.linux.dev>,
<linux-kernel@...r.kernel.org>, <tglx@...utronix.de>, <mingo@...hat.com>,
<bp@...en8.de>, "Du, Fan" <fan.du@...el.com>
Subject: Re: [V2 PATCH] x86/tdx: add VIRT_CPUID2 virtualization if REDUCE_VE
was not successful
On 2025/4/29 22:50, Dave Hansen wrote:
> On 4/29/25 07:31, Jun Miao wrote:
>> REDUCE_VE can only be enabled if x2APIC_ID has been properly configured
>> with a unique value for each VCPU. Since an activated topology
>> configuration from the VMM is the prerequisite for both REDUCE_VE and
>> ENUM_TOPOLOGY, check for it first and move that check into
>> reduce_unnecessary_ve(). The function enable_cpu_topology_enumeration()
>> was small enough that it can be folded into reduce_unnecessary_ve().
>
> Isn't this just working around VMM bugs? Shouldn't we just panic as
> quickly as possible so the VMM config gets fixed rather than adding kludges?
Right now, failing to virtualize either of these two cases causes a
regression for TD VMs compared to legacy VMs. Do you mean the panic
should happen only for the #VE caused by CPUID leaf 0x2, or for both
cases (i.e. also when the VMM does not configure the topology)?

Currently most of the customer complaints are about CPUID leaf 0x2 not
being virtualized, and most of those accesses come from user space. Is
it appropriate for such accesses to directly cause a guest kernel panic?
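
For reference, my understanding of the consolidation being proposed is
roughly the sketch below. The field and control names follow the
existing helpers in arch/x86/coco/tdx/tdx.c (tdg_vm_rd()/tdg_vm_wr(),
TDCS_TOPOLOGY_ENUM_CONFIGURED, TD_CTLS_*), with TD_CTLS_VIRT_CPUID2
taken from the patch subject; it is illustrative only, not the exact
patch:

static void reduce_unnecessary_ve(void)
{
	u64 configured, err;

	/*
	 * An activated topology configuration (a unique x2APIC_ID per
	 * VCPU) is the prerequisite for both REDUCE_VE and
	 * ENUM_TOPOLOGY, so check it once up front.
	 */
	tdg_vm_rd(TDCS_TOPOLOGY_ENUM_CONFIGURED, &configured);
	if (!configured) {
		pr_err("VMM did not configure X2APIC_IDs properly\n");
		return;
	}

	/* Preferred: disable the whole set of unnecessary #VEs. */
	err = tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_REDUCE_VE, TD_CTLS_REDUCE_VE);
	if (err == TDX_SUCCESS)
		return;

	/*
	 * REDUCE_VE was not successful; fall back to the individual
	 * controls: topology enumeration and CPUID leaf 0x2
	 * virtualization.
	 */
	tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_ENUM_TOPOLOGY, TD_CTLS_ENUM_TOPOLOGY);
	tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_VIRT_CPUID2, TD_CTLS_VIRT_CPUID2);
}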
Thanks,
Zhiquan