Message-ID: <Y9A8u3AqvUWc7pwL@google.com>
Date:   Tue, 24 Jan 2023 20:16:59 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Like Xu <like.xu.linux@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/8] KVM: x86/pmu: Rewrite reprogram_counters() to
 improve performance

On Fri, Nov 11, 2022, Like Xu wrote:
> From: Like Xu <likexu@...cent.com>
> 
> A valid pmc is always checked before pmu->reprogram_pmi is used. Eliminate
> this redundancy by setting the counter's bitmask directly, and in addition
> trigger KVM_REQ_PMU only once to save CPU cycles.

It's a little silly, but can you split this into two patches?  First optimize the
helper, then expose it in pmu.h.  The optimization stands on its own, whereas the
code movement is justified only by the incoming AMD PMU v2 support.
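
E.g. patch 1 could keep reprogram_counters() static in pmu_intel.c and do
only the optimization, something like (untested):

static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
{
	int bit;

	if (!diff)
		return;

	/*
	 * Set the affected counters' bits in reprogram_pmi directly instead
	 * of bouncing through intel_pmc_idx_to_pmc(), and raise a single
	 * KVM_REQ_PMU for the whole batch.
	 */
	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
		__set_bit(bit, pmu->reprogram_pmi);
	kvm_make_request(KVM_REQ_PMU, pmu_to_vcpu(pmu));
}

Patch 2 then becomes a pure move to pmu.h (plus "static inline"), i.e. no
functional change to review.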

> Signed-off-by: Like Xu <likexu@...cent.com>
> ---
>  arch/x86/kvm/pmu.h           | 11 +++++++++++
>  arch/x86/kvm/vmx/pmu_intel.c | 12 ------------
>  2 files changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 2b5376ba66ea..be552c8217a0 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -189,6 +189,17 @@ static inline void kvm_pmu_request_counter_reprogam(struct kvm_pmc *pmc)
>  	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
>  }
>  
> +static inline void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
> +{
> +	int bit;
> +
> +	if (diff) {
> +		for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
> +			__set_bit(bit, pmu->reprogram_pmi);
> +		kvm_make_request(KVM_REQ_PMU, pmu_to_vcpu(pmu));
> +	}
> +}
> +
>  void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
>  void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
>  int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 2f7cd388859c..db704eea2d7c 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -68,18 +68,6 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
>  	}
>  }
>  
> -static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
> -{
> -	int bit;
> -	struct kvm_pmc *pmc;
> -
> -	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
> -		pmc = intel_pmc_idx_to_pmc(pmu, bit);
> -		if (pmc)
> -			kvm_pmu_request_counter_reprogam(pmc);
> -	}
> -}
> -
>  static bool intel_hw_event_available(struct kvm_pmc *pmc)
>  {
>  	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> -- 
> 2.38.1
> 
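
For context, the reason the helper wants to live in pmu.h is the global ctrl
MSR write path, where "diff" is the set of counters whose global enable bit
flipped.  On Intel that boils down to roughly the below (wrapper name made
up, untested), and the incoming AMD PMU v2 support needs the same logic for
its global control MSR:

static void pmu_write_global_ctrl(struct kvm_pmu *pmu, u64 data)
{
	u64 diff;

	if (pmu->global_ctrl == data)
		return;

	diff = pmu->global_ctrl ^ data;	/* counters whose enable bit toggled */
	pmu->global_ctrl = data;
	reprogram_counters(pmu, diff);
}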
