Message-ID: <ZJYCtDN+ITmrgCUs@google.com>
Date:   Fri, 23 Jun 2023 13:38:12 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Xiong Zhang <xiong.y.zhang@...el.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        pbonzini@...hat.com, peterz@...radead.org, like.xu.linux@...il.com,
        kan.liang@...ux.intel.com, zhenyuw@...ux.intel.com,
        zhiyuan.lv@...el.com
Subject: Re: [PATCH 3/4] KVM: VMX/pmu: Enable inactive vLBR event in guest LBR
 MSR emulation

On Fri, Jun 16, 2023, Xiong Zhang wrote:
> A vLBR event can be inactive in two cases:
> a. a host per-cpu pinned LBR event occupies the LBR when the vLBR event
> is created
> b. the vLBR event is preempted by a host per-cpu pinned LBR event in the
> VM-exit handler
> When the vLBR event is inactive, the guest can't access the LBR MSRs: the
> event is forced into the error state and excluded from scheduling by the
> perf scheduler. So the vLBR event can't become active again via the perf
> scheduler even after the host per-cpu pinned LBR event has released the
> LBR. KVM can instead enable the vLBR event proactively, so that it becomes
> active and the LBR MSRs can be passed through to the guest.
> 
> Signed-off-by: Xiong Zhang <xiong.y.zhang@...el.com>
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 741efe2c497b..5a3ab8c8711b 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -314,7 +314,16 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
>  	if (!intel_pmu_is_valid_lbr_msr(vcpu, index))
>  		return false;
>  
> -	if (!lbr_desc->event && intel_pmu_create_guest_lbr_event(vcpu) < 0)
> +	/* vLBR event may be inactive, but physical LBR may be free now.

	/*
	 * This is the preferred block comment style.
	 */

> +	 * but vLBR event is pinned event, once it is inactive state, perf
> +	 * will force it to error state in merge_sched_in() and exclude it from
> +	 * perf schedule, so even if LBR is free now, vLBR event couldn't be active
> +	 * through perf scheduler and vLBR event could be active through
> +	 * perf_event_enable().
> +	 */

Trimming that down, is this what you mean?

	/*
	 * Attempt to re-enable the vLBR event if it was disabled due to
	 * contention with host LBR usage, i.e. was put into an error state.
	 * Perf doesn't notify KVM if the host stops using LBRs, i.e. KVM needs
	 * to manually re-enable the event.
	 */
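
I.e. the end result would look something like this (same code as in your
patch, just with the comment reworded):

	/*
	 * Attempt to re-enable the vLBR event if it was disabled due to
	 * contention with host LBR usage, i.e. was put into an error state.
	 * Perf doesn't notify KVM if the host stops using LBRs, i.e. KVM needs
	 * to manually re-enable the event.
	 */
	if (lbr_desc->event && (lbr_desc->event->state == PERF_EVENT_STATE_ERROR))
		perf_event_enable(lbr_desc->event);
	else if (!lbr_desc->event && intel_pmu_create_guest_lbr_event(vcpu) < 0)
		goto dummy;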

Which begs the question, why can't there be a notification of some form that the
LBRs are once again available?

Assuming that's too difficult for whatever reason, why wait until the guest tries
to read LBRs?  E.g. why not be more aggressive and try to re-enable vLBRs on every
VM-Exit?
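
Completely untested, and where exactly this gets called from is just for
illustration, but I'm thinking of a small helper along these lines hooked
into the VM-Exit path:

/*
 * Sketch only: opportunistically kick the vLBR event out of the error state
 * perf put it into when a host LBR event stole the LBRs.
 */
static void intel_pmu_reenable_guest_lbr_event(struct kvm_vcpu *vcpu)
{
	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);

	if (lbr_desc->event &&
	    lbr_desc->event->state == PERF_EVENT_STATE_ERROR)
		perf_event_enable(lbr_desc->event);
}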

And if we do wait until the guest touches relevant MSRs, shouldn't writes to
DEBUG_CTL that set DEBUGCTLMSR_LBR also try to re-enable the event?
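
I.e. something along these lines in the MSR_IA32_DEBUGCTLMSR write path
(rough sketch only, I'm paraphrasing the existing create-on-write check
from memory):

	case MSR_IA32_DEBUGCTLMSR:
		...
		if (intel_pmu_lbr_is_enabled(vcpu) && (data & DEBUGCTLMSR_LBR)) {
			struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);

			/* Create the vLBR event on first enable, or un-error it. */
			if (!lbr_desc->event)
				intel_pmu_create_guest_lbr_event(vcpu);
			else if (lbr_desc->event->state == PERF_EVENT_STATE_ERROR)
				perf_event_enable(lbr_desc->event);
		}
		break;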

Lastly, what guarantees that the MSRs hold guest data?  I assume perf purges the
MSRs at some point, but it would be helpful to call that out in the changelog.

> +	if (lbr_desc->event && (lbr_desc->event->state == PERF_EVENT_STATE_ERROR))
> +		perf_event_enable(lbr_desc->event);
> +	else if (!lbr_desc->event && intel_pmu_create_guest_lbr_event(vcpu) < 0)
>  		goto dummy;
>  
>  	/*
> -- 
> 2.25.1
> 
