Message-Id: <20211112095139.21775-3-likexu@tencent.com>
Date: Fri, 12 Nov 2021 17:51:34 +0800
From: Like Xu <like.xu.linux@...il.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Like Xu <likexu@...cent.com>
Subject: [PATCH 2/7] KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event
From: Like Xu <likexu@...cent.com>
According to the CPUID 0x0A.EBX bit vector, event [7] should be the as yet
unimplemented Intel architectural performance event "Topdown Slots", not
the *kernel* generalized common hardware event "REF_CPU_CYCLES". Therefore,
skip the CPUID unavailability check in intel_find_arch_event() for the last
REF_CPU_CYCLES event and update the confusing comment.
Fixes: 62079d8a43128 ("KVM: PMU: add proper support for fixed counter 2")
Signed-off-by: Like Xu <likexu@...cent.com>
---
arch/x86/kvm/vmx/pmu_intel.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
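Note (not part of the patch): a minimal, self-contained sketch of the rule the
change relies on. CPUID 0x0A.EBX's "event unavailable" mask only covers the
seven architectural events at indices 0-6 of intel_arch_events[]; index 7
(PERF_COUNT_HW_REF_CPU_CYCLES) is a kernel-internal generic event with no
corresponding EBX bit, so it should never be masked off. The helper names and
the NR_ARCH_EVENTS_CPUID constant below are illustrative, not KVM code.

/* Illustrative sketch only; helper names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define NR_ARCH_EVENTS_CPUID	7	/* events enumerated by CPUID 0x0A.EBX[6:0] */

static bool arch_event_is_gated_by_cpuid(unsigned int idx)
{
	/* Only indices 0-6 have an "unavailable" bit in CPUID 0x0A.EBX. */
	return idx < NR_ARCH_EVENTS_CPUID;
}

static bool arch_event_available(uint32_t available_event_types, unsigned int idx)
{
	/* Events beyond the CPUID-enumerated range are always usable. */
	if (!arch_event_is_gated_by_cpuid(idx))
		return true;

	return available_event_types & (1u << idx);
}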
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b8e0d21b7c8a..bc6845265362 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -21,7 +21,6 @@
#define MSR_PMC_FULL_WIDTH_BIT (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
static struct kvm_event_hw_type_mapping intel_arch_events[] = {
- /* Index must match CPUID 0x0A.EBX bit vector */
[0] = { 0x3c, 0x00, PERF_COUNT_HW_CPU_CYCLES },
[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
[2] = { 0x3c, 0x01, PERF_COUNT_HW_BUS_CYCLES },
@@ -29,6 +28,7 @@ static struct kvm_event_hw_type_mapping intel_arch_events[] = {
[4] = { 0x2e, 0x41, PERF_COUNT_HW_CACHE_MISSES },
[5] = { 0xc4, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
[6] = { 0xc5, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
+ /* The above index must match CPUID 0x0A.EBX bit vector */
[7] = { 0x00, 0x03, PERF_COUNT_HW_REF_CPU_CYCLES },
};
@@ -75,9 +75,9 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
int i;
for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
- if (intel_arch_events[i].eventsel == event_select
- && intel_arch_events[i].unit_mask == unit_mask
- && (pmu->available_event_types & (1 << i)))
+ if (intel_arch_events[i].eventsel == event_select &&
+ intel_arch_events[i].unit_mask == unit_mask &&
+ ((i > 6) || pmu->available_event_types & (1 << i)))
break;
if (i == ARRAY_SIZE(intel_arch_events))
--
2.33.0