Message-Id: <20220302111334.12689-9-likexu@tencent.com>
Date: Wed, 2 Mar 2022 19:13:30 +0800
From: Like Xu <like.xu.linux@...il.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Jim Mattson <jmattson@...gle.com>, kvm@...r.kernel.org,
	Sean Christopherson <seanjc@...gle.com>,
	Wanpeng Li <wanpengli@...cent.com>,
	Vitaly Kuznetsov <vkuznets@...hat.com>,
	Joerg Roedel <joro@...tes.org>,
	linux-kernel@...r.kernel.org,
	Like Xu <likexu@...cent.com>,
	Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH v2 08/12] perf: x86/core: Add interface to query perfmon_event_map[] directly

From: Like Xu <likexu@...cent.com>

Currently, we have [intel|knc|p4|p6]_perfmon_event_map on the Intel
platforms and amd_[f17h]_perfmon_event_map on the AMD platforms. Early
KVM code and other potential perf_event users may have hard-coded these
perfmon_maps (e.g., arch/x86/kvm/svm/pmu.c), so it does not make sense
to program a common hardware event based on the generic "enum
perf_hw_id" when the two tables do not match.

Let's provide an interface for callers outside the perf subsystem to get
the counter config based on the perfmon_event_map currently in use,
which also helps to save bytes.

Cc: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Like Xu <likexu@...cent.com>
---
 arch/x86/events/core.c            | 11 +++++++++++
 arch/x86/include/asm/perf_event.h |  6 ++++++
 2 files changed, 17 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index e686c5e0537b..e760a1348c62 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2996,3 +2996,14 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 	cap->events_mask_len	= x86_pmu.events_mask_len;
 }
 EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability);
+
+u64 perf_get_hw_event_config(int hw_event)
+{
+	int max = x86_pmu.max_events;
+
+	if (hw_event < max)
+		return x86_pmu.event_map(array_index_nospec(hw_event, max));
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(perf_get_hw_event_config);
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8fc1b5003713..822927045406 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -477,6 +477,7 @@ struct x86_pmu_lbr {
 };
 
 extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap);
+extern u64 perf_get_hw_event_config(int hw_event);
 extern void perf_check_microcode(void);
 extern void perf_clear_dirty_counters(void);
 extern int x86_perf_rdpmc_index(struct perf_event *event);
@@ -486,6 +487,11 @@ static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 	memset(cap, 0, sizeof(*cap));
 }
 
+static inline u64 perf_get_hw_event_config(int hw_event)
+{
+	return 0;
+}
+
 static inline void perf_events_lapic_init(void)	{ }
 static inline void perf_check_microcode(void)	{ }
 #endif
-- 
2.35.1
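
[Editor's note] To illustrate the commit message, here is a minimal, hypothetical
sketch of how a caller outside the perf subsystem (for example KVM's vPMU code)
might use the new helper instead of carrying its own copy of the vendor
perfmon_event_map. Only perf_get_hw_event_config() and enum perf_hw_id come from
the patch and the existing kernel headers; the wrapper function, its name, and the
debug message are invented for illustration and are not part of this series.

/*
 * Illustrative sketch only (not part of the patch): translate a generic
 * perf hardware event id into the event encoding of the perfmon_event_map
 * currently in use, instead of hard-coding a private table.
 */
#include <linux/perf_event.h>
#include <linux/printk.h>
#include <asm/perf_event.h>

static u64 example_hw_id_to_config(enum perf_hw_id hw_id)
{
	/*
	 * perf_get_hw_event_config() returns 0 when hw_id is out of range
	 * for the active x86_pmu or the event is unmapped, so a zero
	 * result means "no usable encoding on this platform".
	 */
	u64 config = perf_get_hw_event_config(hw_id);

	if (!config)
		pr_debug("no perfmon mapping for hw event id %d\n", hw_id);

	return config;
}

The point of the sketch is only the call shape: the caller passes a generic
"enum perf_hw_id" value and lets the active x86_pmu's event_map() decide the
vendor-specific config, which is what the commit message proposes for users
such as arch/x86/kvm/svm/pmu.c.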