Message-ID: <8a1953a5-8486-4dd3-9d3d-1b7f142f1cab@zytor.com>
Date: Thu, 31 Jul 2025 00:24:02 -0700
From: Xin Li <xin@...or.com>
To: Chao Gao <chao.gao@...el.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-doc@...r.kernel.org, pbonzini@...hat.com, seanjc@...gle.com,
corbet@....net, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
luto@...nel.org, peterz@...radead.org, andrew.cooper3@...rix.com,
hch@...radead.org
Subject: Re: [PATCH v5 20/23] KVM: nVMX: Add FRED VMCS fields to nested VMX
context handling
On 7/23/2025 11:50 PM, Chao Gao wrote:
>> @@ -2578,6 +2588,17 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
>> vmcs_writel(GUEST_IDTR_BASE, vmcs12->guest_idtr_base);
>>
>> vmx_segment_cache_clear(vmx);
>> +
>> + if (nested_cpu_load_guest_fred_states(vmcs12)) {
>> + vmcs_write64(GUEST_IA32_FRED_CONFIG, vmcs12->guest_ia32_fred_config);
>> + vmcs_write64(GUEST_IA32_FRED_RSP1, vmcs12->guest_ia32_fred_rsp1);
>> + vmcs_write64(GUEST_IA32_FRED_RSP2, vmcs12->guest_ia32_fred_rsp2);
>> + vmcs_write64(GUEST_IA32_FRED_RSP3, vmcs12->guest_ia32_fred_rsp3);
>> + vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmcs12->guest_ia32_fred_stklvls);
>> + vmcs_write64(GUEST_IA32_FRED_SSP1, vmcs12->guest_ia32_fred_ssp1);
>> + vmcs_write64(GUEST_IA32_FRED_SSP2, vmcs12->guest_ia32_fred_ssp2);
>> + vmcs_write64(GUEST_IA32_FRED_SSP3, vmcs12->guest_ia32_fred_ssp3);
>> + }
>
> I think we need to snapshot L1's FRED MSR values before nested VM entry and
> propagate them to GUEST_IA32_FRED* of vmcs02 for the
> !nested_cpu_load_guest_fred_states(vmcs12) case, i.e., from the guest's view,
> FRED MSRs shouldn't change across VM entry if "Load guest FRED states" isn't
> set.
>
> Refer to the comment above the 'pre_vmenter_debugctl' definition and also the
> CET implementation*.
>
> [*]: https://lore.kernel.org/kvm/20250704085027.182163-22-chao.gao@intel.com/
>
Nice catch; it really took me a while to understand the issue.
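
For the record, a rough sketch of the direction I have in mind, mirroring the
pre_vmenter_debugctl handling.  The vmx->nested.pre_vmenter_fred_* fields are
placeholder names that would need to be added to struct nested_vmx; they are
not existing code:

	/* In nested_vmx_enter_non_root_mode(), before switching to vmcs02: */
	if (!nested_cpu_load_guest_fred_states(vmcs12)) {
		/* Snapshot L1's current FRED state from vmcs01. */
		vmx->nested.pre_vmenter_fred_config  = vmcs_read64(GUEST_IA32_FRED_CONFIG);
		vmx->nested.pre_vmenter_fred_rsp1    = vmcs_read64(GUEST_IA32_FRED_RSP1);
		vmx->nested.pre_vmenter_fred_rsp2    = vmcs_read64(GUEST_IA32_FRED_RSP2);
		vmx->nested.pre_vmenter_fred_rsp3    = vmcs_read64(GUEST_IA32_FRED_RSP3);
		vmx->nested.pre_vmenter_fred_stklvls = vmcs_read64(GUEST_IA32_FRED_STKLVLS);
		vmx->nested.pre_vmenter_fred_ssp1    = vmcs_read64(GUEST_IA32_FRED_SSP1);
		vmx->nested.pre_vmenter_fred_ssp2    = vmcs_read64(GUEST_IA32_FRED_SSP2);
		vmx->nested.pre_vmenter_fred_ssp3    = vmcs_read64(GUEST_IA32_FRED_SSP3);
	}

	/* In prepare_vmcs02_rare(), complementing the hunk quoted above: */
	if (nested_cpu_load_guest_fred_states(vmcs12)) {
		/* vmcs12 supplies L2's FRED state, as in the existing hunk. */
	} else {
		/* Keep L1's values so the guest sees no change across VM entry. */
		vmcs_write64(GUEST_IA32_FRED_CONFIG,  vmx->nested.pre_vmenter_fred_config);
		vmcs_write64(GUEST_IA32_FRED_RSP1,    vmx->nested.pre_vmenter_fred_rsp1);
		vmcs_write64(GUEST_IA32_FRED_RSP2,    vmx->nested.pre_vmenter_fred_rsp2);
		vmcs_write64(GUEST_IA32_FRED_RSP3,    vmx->nested.pre_vmenter_fred_rsp3);
		vmcs_write64(GUEST_IA32_FRED_STKLVLS, vmx->nested.pre_vmenter_fred_stklvls);
		vmcs_write64(GUEST_IA32_FRED_SSP1,    vmx->nested.pre_vmenter_fred_ssp1);
		vmcs_write64(GUEST_IA32_FRED_SSP2,    vmx->nested.pre_vmenter_fred_ssp2);
		vmcs_write64(GUEST_IA32_FRED_SSP3,    vmx->nested.pre_vmenter_fred_ssp3);
	}

I'll take a closer look at how the CET series structures this and follow the
same pattern in the next revision.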