Date:   Thu, 22 Sep 2022 23:18:30 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Like Xu <like.xu.linux@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/7] KVM: x86/pmu: Defer reprogram_counter() to
 kvm_pmu_handle_event()

On Wed, Aug 31, 2022, Like Xu wrote:
> @@ -542,7 +541,9 @@ static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
>  static inline bool cpl_is_matched(struct kvm_pmc *pmc)
>  {
>  	bool select_os, select_user;
> -	u64 config = pmc->current_config;
> +	u64 config = pmc_is_gp(pmc) ? pmc->eventsel :
> +		(u64)fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl,
> +				      pmc->idx - INTEL_PMC_IDX_FIXED);

Don't use a ternary here; the same conditional exists immediately below, i.e.
this can simply be:

	u64 config;

	if (pmc_is_gp(pmc)) {
		config = pmc->eventsel;
		select_os = config & ARCH_PERFMON_EVENTSEL_OS;
		select_user = config & ARCH_PERFMON_EVENTSEL_USR;
	} else {
		config = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl,
					  pmc->idx - INTEL_PMC_IDX_FIXED);
		select_os = config & 0x1;
		select_user = config & 0x2;
	}

Note, there's no need to cast to a u64; fixed_ctr_ctrl is a u64, and even if it
gets temporarily truncated to a u8, the relevant path only checks bits 0 and 1.
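
For reference, fixed_ctrl_field() just pulls the per-counter 4-bit field out of
IA32_FIXED_CTR_CTRL, roughly (a sketch, not the verbatim pmu.h definition):

	/* 4 control bits per fixed counter: 0 = OS (CPL0), 1 = USR (CPL>0),
	 * 2 = AnyThread, 3 = PMI enable; cpl_is_matched() only cares about
	 * bits 0 and 1, i.e. the 0x1/0x2 checks in the fixed branch. */
	#define fixed_ctrl_field(ctrl_reg, idx)  (((ctrl_reg) >> ((idx) * 4)) & 0xf)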

>  
>  	if (pmc_is_gp(pmc)) {
>  		select_os = config & ARCH_PERFMON_EVENTSEL_OS;
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 5cc5721f260b..847e7112a5d3 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -183,7 +183,11 @@ static inline void kvm_init_pmu_capability(void)
>  					     KVM_PMC_MAX_FIXED);
>  }
>  
> -void reprogram_counter(struct kvm_pmc *pmc);
> +static inline void kvm_pmu_request_counter_reprogam(struct kvm_pmc *pmc)
> +{
> +	__set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);

Given that everything else that accesses reprogram_pmi uses atomic accesses, please
make this atomic as well, i.e. use set_bit().  Even if reprogram_pmi is currently
guaranteed to be accessed only from the CPU that has loaded the vCPU (no idea if
that's true), I am planning on fixing the PMU filter (KVM doesn't force reprogramming
on filter changes) by setting all bits in reprogram_pmi from a separate CPU, at
which point this needs to be atomic.
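
E.g., a minimal sketch of what I have in mind (helper name and body otherwise as
posted in the patch):

	static inline void kvm_pmu_request_counter_reprogam(struct kvm_pmc *pmc)
	{
		/* Atomic set_bit() so that reprogram_pmi can also be set safely
		 * from a remote CPU, e.g. on a PMU event filter change. */
		set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
	}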
