Message-ID: <2440b9bf-a2a1-4f66-94b2-71f47d62f3db@linux.intel.com>
Date: Mon, 8 Dec 2025 17:29:53 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>, Marc Zyngier <maz@...nel.org>,
Oliver Upton <oupton@...nel.org>, Tianrui Zhao <zhaotianrui@...ngson.cn>,
Bibo Mao <maobibo@...ngson.cn>, Huacai Chen <chenhuacai@...nel.org>,
Anup Patel <anup@...infault.org>, Paul Walmsley <pjw@...nel.org>,
Palmer Dabbelt <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
Xin Li <xin@...or.com>, "H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>, Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
kvm@...r.kernel.org, loongarch@...ts.linux.dev,
kvm-riscv@...ts.infradead.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
Mingwei Zhang <mizhang@...gle.com>, Xudong Hao <xudong.hao@...el.com>,
Sandipan Das <sandipan.das@....com>,
Xiong Zhang <xiong.y.zhang@...ux.intel.com>,
Manali Shukla <manali.shukla@....com>, Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH v6 37/44] KVM: VMX: Dedup code for removing MSR from
VMCS's auto-load list
On 12/6/2025 8:17 AM, Sean Christopherson wrote:
> Add a helper to remove an MSR from an auto-{load,store} list to dedup the
> msr_autoload code, and in anticipation of adding similar functionality for
> msr_autostore.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 31 ++++++++++++++++---------------
> 1 file changed, 16 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 52bcb817cc15..a51f66d1b201 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1040,9 +1040,22 @@ static int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr)
> return -ENOENT;
> }
>
> +static void vmx_remove_auto_msr(struct vmx_msrs *m, u32 msr,
> + unsigned long vmcs_count_field)
> +{
> + int i;
> +
> + i = vmx_find_loadstore_msr_slot(m, msr);
> + if (i < 0)
> + return;
> +
> + --m->nr;
> + m->val[i] = m->val[m->nr];
Sometimes the order of MSR writes does matter, e.g., PERF_GLOBAL_CTRL should be
written last, after all other PMU MSRs. So moving the last MSR entry directly
into the cleared slot can reorder the list and could, at least in theory, break
the intended MSR write sequence.
I know this isn't a real problem today, since the vPMU currently doesn't use the
MSR auto-load feature for any PMU MSR, but it's still unsafe for future users.
I'm not sure it's worth doing a strict, order-preserving entry shift right now;
perhaps we could at least add a comment or warning, or do something like the
sketch below.
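
If we ever do want to preserve ordering, an order-preserving variant could look
roughly like this (completely untested, just reusing the names from this patch);
it trades the O(1) swap for a memmove of the tail:

static void vmx_remove_auto_msr(struct vmx_msrs *m, u32 msr,
				unsigned long vmcs_count_field)
{
	int i;

	i = vmx_find_loadstore_msr_slot(m, msr);
	if (i < 0)
		return;

	--m->nr;
	/* Shift the tail down one slot to keep the remaining entries in order. */
	memmove(&m->val[i], &m->val[i + 1],
		(m->nr - i) * sizeof(m->val[0]));
	vmcs_write32(vmcs_count_field, m->nr);
}

That would keep an ordering-sensitive MSR such as PERF_GLOBAL_CTRL in its
position relative to the other entries, at the cost of the extra copy.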
Thanks.
> + vmcs_write32(vmcs_count_field, m->nr);
> +}
> +
> static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
> {
> - int i;
> struct msr_autoload *m = &vmx->msr_autoload;
>
> switch (msr) {
> @@ -1063,21 +1076,9 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
> }
> break;
> }
> - i = vmx_find_loadstore_msr_slot(&m->guest, msr);
> - if (i < 0)
> - goto skip_guest;
> - --m->guest.nr;
> - m->guest.val[i] = m->guest.val[m->guest.nr];
> - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
>
> -skip_guest:
> - i = vmx_find_loadstore_msr_slot(&m->host, msr);
> - if (i < 0)
> - return;
> -
> - --m->host.nr;
> - m->host.val[i] = m->host.val[m->host.nr];
> - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
> + vmx_remove_auto_msr(&m->guest, msr, VM_ENTRY_MSR_LOAD_COUNT);
> + vmx_remove_auto_msr(&m->host, msr, VM_EXIT_MSR_LOAD_COUNT);
> }
>
> static __always_inline void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,