Message-ID: <ZhhvyhdvF-1LZNlu@google.com>
Date: Thu, 11 Apr 2024 16:18:34 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Xiong Zhang <xiong.y.zhang@...ux.intel.com>
Cc: pbonzini@...hat.com, peterz@...radead.org, mizhang@...gle.com, 
	kan.liang@...el.com, zhenyuw@...ux.intel.com, dapeng1.mi@...ux.intel.com, 
	jmattson@...gle.com, kvm@...r.kernel.org, linux-perf-users@...r.kernel.org, 
	linux-kernel@...r.kernel.org, zhiyuan.lv@...el.com, eranian@...gle.com, 
	irogers@...gle.com, samantha.alt@...el.com, like.xu.linux@...il.com, 
	chao.gao@...el.com
Subject: Re: [RFC PATCH 40/41] KVM: x86/pmu: Separate passthrough PMU logic in
 set/get_msr() from non-passthrough vPMU

On Fri, Jan 26, 2024, Xiong Zhang wrote:
> From: Mingwei Zhang <mizhang@...gle.com>
> 
> Separate passthrough PMU logic from non-passthrough vPMU code. There are
> two places in the passthrough vPMU where set/get_msr() may call into the
> existing non-passthrough vPMU code: 1) setting/getting counters; 2) setting
> the global_ctrl MSR.
> 
> In the former case, the non-passthrough vPMU calls into
> pmc_{read,write}_counter(), which wire into the perf API. Update these
> functions to avoid the perf API invocation.
> 
> The 2nd case is where a global_ctrl MSR write invokes reprogram_counters(),
> which in turn invokes the non-passthrough PMU logic. So use the
> pmu->passthrough flag to guard the call.
> 
> Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
> ---
>  arch/x86/kvm/pmu.c |  4 +++-
>  arch/x86/kvm/pmu.h | 10 +++++++++-
>  2 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 9e62e96fe48a..de653a67ba93 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -652,7 +652,9 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (pmu->global_ctrl != data) {
>  			diff = pmu->global_ctrl ^ data;
>  			pmu->global_ctrl = data;
> -			reprogram_counters(pmu, diff);
> +			/* Passthrough vPMU never reprograms counters. */
> +			if (!pmu->passthrough)

This should probably be handled in reprogram_counters(), otherwise we'll be
playing whack-a-mole, e.g. this misses MSR_IA32_PEBS_ENABLE, which is benign,
but only because PEBS isn't yet supported.

> +				reprogram_counters(pmu, diff);
>  		}
>  		break;
>  	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 0fc37a06fe48..ab8d4a8e58a8 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -70,6 +70,9 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
>  	u64 counter, enabled, running;
>  
>  	counter = pmc->counter;
> +	if (pmc_to_pmu(pmc)->passthrough)
> +		return counter & pmc_bitmask(pmc);

Won't perf_event always be NULL for mediated counters?  I.e. this can be dropped,
I think.
