Message-ID: <aTheSQb9fhXmZKw6@google.com>
Date: Tue, 9 Dec 2025 09:37:13 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Dapeng Mi <dapeng1.mi@...ux.intel.com>
Cc: Marc Zyngier <maz@...nel.org>, Oliver Upton <oupton@...nel.org>, 
	Tianrui Zhao <zhaotianrui@...ngson.cn>, Bibo Mao <maobibo@...ngson.cn>, 
	Huacai Chen <chenhuacai@...nel.org>, Anup Patel <anup@...infault.org>, 
	Paul Walmsley <pjw@...nel.org>, Palmer Dabbelt <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>, 
	Xin Li <xin@...or.com>, "H. Peter Anvin" <hpa@...or.com>, Andy Lutomirski <luto@...nel.org>, 
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Paolo Bonzini <pbonzini@...hat.com>, linux-arm-kernel@...ts.infradead.org, 
	kvmarm@...ts.linux.dev, kvm@...r.kernel.org, loongarch@...ts.linux.dev, 
	kvm-riscv@...ts.infradead.org, linux-riscv@...ts.infradead.org, 
	linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org, 
	Mingwei Zhang <mizhang@...gle.com>, Xudong Hao <xudong.hao@...el.com>, 
	Sandipan Das <sandipan.das@....com>, Xiong Zhang <xiong.y.zhang@...ux.intel.com>, 
	Manali Shukla <manali.shukla@....com>, Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH v6 37/44] KVM: VMX: Dedup code for removing MSR from
 VMCS's auto-load list

On Mon, Dec 08, 2025, Dapeng Mi wrote:
> 
> On 12/6/2025 8:17 AM, Sean Christopherson wrote:
> > Add a helper to remove an MSR from an auto-{load,store} list to dedup the
> > msr_autoload code, and in anticipation of adding similar functionality for
> > msr_autostore.
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > ---
> >  arch/x86/kvm/vmx/vmx.c | 31 ++++++++++++++++---------------
> >  1 file changed, 16 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 52bcb817cc15..a51f66d1b201 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -1040,9 +1040,22 @@ static int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr)
> >  	return -ENOENT;
> >  }
> >  
> > +static void vmx_remove_auto_msr(struct vmx_msrs *m, u32 msr,
> > +				unsigned long vmcs_count_field)
> > +{
> > +	int i;
> > +
> > +	i = vmx_find_loadstore_msr_slot(m, msr);
> > +	if (i < 0)
> > +		return;
> > +
> > +	--m->nr;
> > +	m->val[i] = m->val[m->nr];
> 
> Sometimes the order of MSR writes does matter, e.g., the PERF_GLOBAL_CTRL MSR
> should be written last, after all other PMU MSR writes.

Hmm, no.  _If_ KVM were writing event selectors using the auto-load lists, then
KVM would need to bookend the event selector MSRs with PERF_GLOBAL_CTRL=0 and
PERF_GLOBAL_CTRL=<new context (guest vs. host)>.  E.g. so that guest PMC counts
aren't polluted with host events, and vice versa.

As things stand today, the only other MSRs are PEBS and the DS area configuration
stuff, and kinda to my earlier point, KVM pre-zeroes MSR_IA32_PEBS_ENABLE as part
of add_atomic_switch_msr() to ensure a quiescent period before VM-Enter.

Heh, and writing PERF_GLOBAL_CTRL last for that sequence might actually be
problematic.  E.g. load host PEBS with guest PERF_GLOBAL_CTRL active.

Anyways, I agree that this might be brittle, but this is all pre-existing behavior
so I don't want to tackle that here unless it's absolutely necessary.

Or wait, by "writing" do you mean "writing MSRs to memory", as opposed to "writing
values to MSRs"?  Regardless, I think my answer is the same: this isn't a problem
today, so I'd prefer to not shuffle the ordering unless it's absolutely necessary.

> So directly moving the last MSR entry into the cleared slot could break the
> MSR write sequence and may cause issues in theory.
> 
> I know this won't actually cause issues today, since the vPMU currently
> doesn't use the MSR auto-load feature to save any PMU MSRs, but it's still
> unsafe for future uses.
> 
> I'm not sure it's worth doing the strict MSR entry shift right now.
>
> Perhaps we could at least add a message to warn users.

Hmm, yeah, but I'm not entirely sure where/how best to document this.  Because
it's not just that vmx_remove_auto_msr() arbitrarily manipulates the order, e.g.
multiple calls to vmx_add_auto_msr() aren't guaranteed to provide ordering because
one or more MSRs may already be in the list.  And the "special" MSRs that can be
switched via dedicated VMCS fields further muddy the waters.

So I'm tempted to not add a comment to the helpers, or even the struct fields,
because unfortunately it's largely a "Here be dragons!" type warning. :-/
