Message-ID: <4281eee7-6423-4ec8-bb18-c6aeee1faf2c@linux.intel.com>
Date:   Wed, 8 Nov 2023 11:06:31 -0500
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Dapeng Mi <dapeng1.mi@...ux.intel.com>,
        Jim Mattson <jmattson@...gle.com>,
        Jinrong Liang <cloudliang@...cent.com>,
        Aaron Lewis <aaronlewis@...gle.com>,
        Like Xu <likexu@...cent.com>
Subject: Re: [PATCH v7 03/19] KVM: x86/pmu: Remove KVM's enumeration of
 Intel's architectural encodings



On 2023-11-07 7:31 p.m., Sean Christopherson wrote:
> Drop KVM's enumeration of Intel's architectural event encodings, and
> instead open code the three encodings (of which only two are real) that
> KVM uses to emulate fixed counters.  Now that KVM doesn't incorrectly
> enforce the availability of architectural encodings, there is no reason
> for KVM to ever care about the encodings themselves, at least not in the
> current format of an array indexed by the encoding's position in CPUID.
> 
> Opportunistically add a comment to explain why KVM cares about eventsel
> values for fixed counters.
> 
> Suggested-by: Jim Mattson <jmattson@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 72 ++++++++++++------------------------
>  1 file changed, 23 insertions(+), 49 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 7737ee2fc62f..c4f2c6a268e7 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -22,52 +22,6 @@
>  
>  #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
>  
> -enum intel_pmu_architectural_events {
> -	/*
> -	 * The order of the architectural events matters as support for each
> -	 * event is enumerated via CPUID using the index of the event.
> -	 */
> -	INTEL_ARCH_CPU_CYCLES,
> -	INTEL_ARCH_INSTRUCTIONS_RETIRED,
> -	INTEL_ARCH_REFERENCE_CYCLES,
> -	INTEL_ARCH_LLC_REFERENCES,
> -	INTEL_ARCH_LLC_MISSES,
> -	INTEL_ARCH_BRANCHES_RETIRED,
> -	INTEL_ARCH_BRANCHES_MISPREDICTED,
> -
> -	NR_REAL_INTEL_ARCH_EVENTS,
> -
> -	/*
> -	 * Pseudo-architectural event used to implement IA32_FIXED_CTR2, a.k.a.
> -	 * TSC reference cycles.  The architectural reference cycles event may
> -	 * or may not actually use the TSC as the reference, e.g. might use the
> -	 * core crystal clock or the bus clock (yeah, "architectural").
> -	 */
> -	PSEUDO_ARCH_REFERENCE_CYCLES = NR_REAL_INTEL_ARCH_EVENTS,
> -	NR_INTEL_ARCH_EVENTS,
> -};
> -
> -static struct {
> -	u8 eventsel;
> -	u8 unit_mask;
> -} const intel_arch_events[] = {
> -	[INTEL_ARCH_CPU_CYCLES]			= { 0x3c, 0x00 },
> -	[INTEL_ARCH_INSTRUCTIONS_RETIRED]	= { 0xc0, 0x00 },
> -	[INTEL_ARCH_REFERENCE_CYCLES]		= { 0x3c, 0x01 },
> -	[INTEL_ARCH_LLC_REFERENCES]		= { 0x2e, 0x4f },
> -	[INTEL_ARCH_LLC_MISSES]			= { 0x2e, 0x41 },
> -	[INTEL_ARCH_BRANCHES_RETIRED]		= { 0xc4, 0x00 },
> -	[INTEL_ARCH_BRANCHES_MISPREDICTED]	= { 0xc5, 0x00 },
> -	[PSEUDO_ARCH_REFERENCE_CYCLES]		= { 0x00, 0x03 },
> -};
> -
> -/* mapping between fixed pmc index and intel_arch_events array */
> -static int fixed_pmc_events[] = {
> -	[0] = INTEL_ARCH_INSTRUCTIONS_RETIRED,
> -	[1] = INTEL_ARCH_CPU_CYCLES,
> -	[2] = PSEUDO_ARCH_REFERENCE_CYCLES,
> -};
> -
>  static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
>  {
>  	struct kvm_pmc *pmc;
> @@ -442,8 +396,29 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	return 0;
>  }
>  
> +/*
> + * Map fixed counter events to architectural general purpose event encodings.
> + * Perf doesn't provide APIs to allow KVM to directly program a fixed counter,
> + * and so KVM instead programs the architectural event to effectively request
> + * the fixed counter.  Perf isn't guaranteed to use a fixed counter and may
> + * instead program the encoding into a general purpose counter, e.g. if a
> + * different perf_event is already utilizing the requested counter, but the end
> + * result is the same (ignoring the fact that using a general purpose counter
> + * will likely exacerbate counter contention).
> + *
> + * Note, reference cycles is counted using a perf-defined "pseudo-encoding",
> + * as there is no architectural general purpose encoding for reference cycles.

That's no longer the case on the latest Intel platforms. Please see commit
ffbe4ab0beda ("perf/x86/intel: Extend the ref-cycles event to GP counters").

Maybe perf should export .event_map to KVM somehow.
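
Something like the below, perhaps (a rough, untested sketch; the
exported helper is made up here and would simply wrap the PMU's
existing x86_pmu.event_map() callback):

	/* perf side (arch/x86/events/core.c): expose the event map. */
	u64 perf_get_hw_event_config(int hw_event)
	{
		if (hw_event >= x86_pmu.max_events)
			return 0;

		return x86_pmu.event_map(hw_event);
	}
	EXPORT_SYMBOL_GPL(perf_get_hw_event_config);

	/* KVM side: ask perf for the encodings instead of hardcoding. */
	static const int fixed_pmc_hw_events[KVM_PMC_MAX_FIXED] = {
		[0] = PERF_COUNT_HW_INSTRUCTIONS,
		[1] = PERF_COUNT_HW_CPU_CYCLES,
		[2] = PERF_COUNT_HW_REF_CPU_CYCLES,
	};

	pmc->eventsel = perf_get_hw_event_config(fixed_pmc_hw_events[index]);

intel_pmu's event_map() already returns the same (unit_mask << 8) |
eventsel packing that setup_fixed_pmc_eventsel() builds by hand, e.g.
0x0300 for ref-cycles, so the values would match today while tracking
any future, generation-specific encoding changes automatically.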

Thanks,
Kan
> + */
>  static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
>  {
> +	const struct {
> +		u8 eventsel;
> +		u8 unit_mask;
> +	} fixed_pmc_events[] = {
> +		[0] = { 0xc0, 0x00 }, /* Instructions Retired / PERF_COUNT_HW_INSTRUCTIONS. */
> +		[1] = { 0x3c, 0x00 }, /* CPU Cycles / PERF_COUNT_HW_CPU_CYCLES. */
> +		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES. */
> +	};
>  	int i;
>  
>  	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);
> @@ -451,10 +426,9 @@ static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
>  	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
>  		int index = array_index_nospec(i, KVM_PMC_MAX_FIXED);
>  		struct kvm_pmc *pmc = &pmu->fixed_counters[index];
> -		u32 event = fixed_pmc_events[index];
>  
> -		pmc->eventsel = (intel_arch_events[event].unit_mask << 8) |
> -				 intel_arch_events[event].eventsel;
> +		pmc->eventsel = (fixed_pmc_events[index].unit_mask << 8) |
> +				 fixed_pmc_events[index].eventsel;
>  	}
>  }
>  
