Message-ID: <aIwOmEzLgkP-9ZDE@google.com>
Date: Thu, 31 Jul 2025 17:47:20 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Xin Li <xin@...or.com>
Cc: Chao Gao <chao.gao@...el.com>, linux-kernel@...r.kernel.org, kvm@...r.kernel.org, 
	pbonzini@...hat.com, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, 
	dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com
Subject: Re: [PATCH v1 2/4] KVM: x86: Introduce MSR read/write emulation helpers

On Thu, Jul 31, 2025, Xin Li wrote:
> On 7/31/2025 3:34 AM, Chao Gao wrote:
> > > -fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> > > +static fastpath_t handle_set_msr_irqoff(struct kvm_vcpu *vcpu, u32 msr, int reg)
> > 
> > How about __handle_fastpath_set_msr_irqoff()? It's better to keep
> > "fastpath" in the function name to convey that this function is for
> > fastpath only.
> 
> This is now a static function with return type fastpath_t, so I guess
> it's okay to remove "fastpath" from its name (it looks like Sean prefers
> shorter function names as long as they contain enough information).
> 
> But if the convention is to have "fastpath" in all fast path function
> names, I can change it.

I'm also greedy and want it both ways :-)

Spoiler alert, this is what I ended up with (completely untested at this point):

static fastpath_t __handle_fastpath_wrmsr(struct kvm_vcpu *vcpu, u32 msr,
					  u64 data)
{
	switch (msr) {
	case APIC_BASE_MSR + (APIC_ICR >> 4):
		if (!lapic_in_kernel(vcpu) || !apic_x2apic_mode(vcpu->arch.apic) ||
		    kvm_x2apic_icr_write_fast(vcpu->arch.apic, data))
			return EXIT_FASTPATH_NONE;
		break;
	case MSR_IA32_TSC_DEADLINE:
		if (!kvm_can_use_hv_timer(vcpu))
			return EXIT_FASTPATH_NONE;

		kvm_set_lapic_tscdeadline_msr(vcpu, data);
		break;
	default:
		return EXIT_FASTPATH_NONE;
	}

	trace_kvm_msr_write(msr, data);

	if (!kvm_skip_emulated_instruction(vcpu))
		return EXIT_FASTPATH_EXIT_USERSPACE;

	return EXIT_FASTPATH_REENTER_GUEST;
}

fastpath_t handle_fastpath_wrmsr(struct kvm_vcpu *vcpu)
{
	return __handle_fastpath_wrmsr(vcpu, kvm_rcx_read(vcpu),
				       kvm_read_edx_eax(vcpu));
}
EXPORT_SYMBOL_GPL(handle_fastpath_wrmsr);

fastpath_t handle_fastpath_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg)
{
	return __handle_fastpath_wrmsr(vcpu, msr, kvm_register_read(vcpu, reg));
}
EXPORT_SYMBOL_GPL(handle_fastpath_wrmsr_imm);
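
For reference, the intended call sites would be something like the below (the
vendor plumbing is still TBD, so treat this as a rough sketch):

	/* Legacy WRMSR exit: MSR index in ECX, data in EDX:EAX. */
	ret = handle_fastpath_wrmsr(vcpu);

	/*
	 * Immediate form: @msr and @reg are decoded from the VM-exit info by
	 * the vendor exit handler, i.e. common code never reads EDX:EAX for
	 * this path and VCPU_EXREG_EDX_EAX isn't needed.
	 */
	ret = handle_fastpath_wrmsr_imm(vcpu, msr, reg);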


> > > {
> > > -	u32 msr = kvm_rcx_read(vcpu);
> > > 	u64 data;
> > > 	fastpath_t ret;
> > > 	bool handled;
> > > @@ -2174,11 +2190,19 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> > > 
> > > 	switch (msr) {
> > > 	case APIC_BASE_MSR + (APIC_ICR >> 4):
> > > -		data = kvm_read_edx_eax(vcpu);
> > > +		if (reg == VCPU_EXREG_EDX_EAX)
> > > +			data = kvm_read_edx_eax(vcpu);
> > > +		else
> > > +			data = kvm_register_read(vcpu, reg);
> > 
> > ...
> > 
> > > +
> > > 		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
> > > 		break;
> > > 	case MSR_IA32_TSC_DEADLINE:
> > > -		data = kvm_read_edx_eax(vcpu);
> > > +		if (reg == VCPU_EXREG_EDX_EAX)
> > > +			data = kvm_read_edx_eax(vcpu);
> > > +		else
> > > +			data = kvm_register_read(vcpu, reg);
> > > +
> > 
> > Hoist this chunk out of the switch clause to avoid duplication.
> 
> I thought about it, but didn't do so because the original code doesn't read
> the MSR data from registers when an MSR isn't handled in the fast path,
> which saves some cycles in most cases.
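
(For reference, hoisting the read as Chao suggested would look roughly like
the below, i.e. the data is read for every exit that reaches this code,
including MSRs that fall through to the default case.)

	if (reg == VCPU_EXREG_EDX_EAX)
		data = kvm_read_edx_eax(vcpu);
	else
		data = kvm_register_read(vcpu, reg);

	switch (msr) {
	case APIC_BASE_MSR + (APIC_ICR >> 4):
		handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
		break;
	case MSR_IA32_TSC_DEADLINE:
		...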

Can you hold off on doing anything with this series?  Mostly to save you time.

Long story short, I unexpectedly dove into the fastpath code this week while sorting
out an issue with the mediated PMU series, and I ended up with a series of patches
to clean things up for both the mediated PMU series and for this series.

With luck, I'll get the cleanups, the mediated PMU series, and a v2 of this series
posted tomorrow (I also have some feedback on VCPU_EXREG_EDX_EAX; we can avoid it
entirely without much fuss).

