Message-ID: <tip-dce86ac75d772047e9bc606154704aa73bfd4c83@git.kernel.org>
Date: Tue, 25 Jun 2019 01:21:26 -0700
From: tip-bot for Kan Liang <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: tglx@...utronix.de, alexander.shishkin@...ux.intel.com,
eranian@...gle.com, torvalds@...ux-foundation.org, acme@...hat.com,
linux-kernel@...r.kernel.org, kan.liang@...ux.intel.com,
mingo@...nel.org, vincent.weaver@...ne.edu, jolsa@...hat.com,
hpa@...or.com, peterz@...radead.org
Subject: [tip:perf/urgent] perf/x86: Clean up PEBS_XMM_REGS
Commit-ID: dce86ac75d772047e9bc606154704aa73bfd4c83
Gitweb: https://git.kernel.org/tip/dce86ac75d772047e9bc606154704aa73bfd4c83
Author: Kan Liang <kan.liang@...ux.intel.com>
AuthorDate: Tue, 28 May 2019 15:08:32 -0700
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 24 Jun 2019 19:19:24 +0200
perf/x86: Clean up PEBS_XMM_REGS
Use the generic macro PERF_REG_EXTENDED_MASK to replace PEBS_XMM_REGS and
avoid duplication.
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vince Weaver <vincent.weaver@...ne.edu>
Link: https://lkml.kernel.org/r/1559081314-9714-3-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
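[Editorial note, not part of the original patch] PERF_REG_EXTENDED_MASK itself
is not defined in this diff; it comes from the x86 uapi perf_regs.h header
introduced earlier in this series. A minimal sketch of what the checks below in
x86_pmu_hw_config() and pebs_update_adaptive_cfg() rely on, assuming that
upstream definition (treat the exact form and file path as assumptions):

	/*
	 * Assumed definition from arch/x86/include/uapi/asm/perf_regs.h:
	 * PERF_REG_X86_XMM0 is bit 32, and each 128-bit XMM register uses
	 * two consecutive 64-bit sample slots, so this mask sets bits 32..63.
	 */
	#define PERF_REG_EXTENDED_MASK	(~((1ULL << PERF_REG_X86_XMM0) - 1))

This covers both halves of every XMM register, a superset of the individual
low-half bits the removed PEBS_XMM_REGS macro enumerated, which is why a single
generic mask can replace the hand-written list.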
arch/x86/events/core.c | 4 ++--
arch/x86/events/intel/ds.c | 2 +-
arch/x86/events/perf_event.h | 18 ------------------
3 files changed, 3 insertions(+), 21 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f315425d8468..7708a6fb5f4a 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -561,13 +561,13 @@ int x86_pmu_hw_config(struct perf_event *event)
}
/* sample_regs_user never support XMM registers */
- if (unlikely(event->attr.sample_regs_user & PEBS_XMM_REGS))
+ if (unlikely(event->attr.sample_regs_user & PERF_REG_EXTENDED_MASK))
return -EINVAL;
/*
* Besides the general purpose registers, XMM registers may
* be collected in PEBS on some platforms, e.g. Icelake
*/
- if (unlikely(event->attr.sample_regs_intr & PEBS_XMM_REGS)) {
+ if (unlikely(event->attr.sample_regs_intr & PERF_REG_EXTENDED_MASK)) {
if (x86_pmu.pebs_no_xmm_regs)
return -EINVAL;
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 6cb38ab02c8a..955b2c688f23 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -987,7 +987,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
pebs_data_cfg |= PEBS_DATACFG_GP;
if ((sample_type & PERF_SAMPLE_REGS_INTR) &&
- (attr->sample_regs_intr & PEBS_XMM_REGS))
+ (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK))
pebs_data_cfg |= PEBS_DATACFG_XMMS;
if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index a6ac2f4f76fc..d3b6e90c80d3 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -121,24 +121,6 @@ struct amd_nb {
(1ULL << PERF_REG_X86_R14) | \
(1ULL << PERF_REG_X86_R15))
-#define PEBS_XMM_REGS \
- ((1ULL << PERF_REG_X86_XMM0) | \
- (1ULL << PERF_REG_X86_XMM1) | \
- (1ULL << PERF_REG_X86_XMM2) | \
- (1ULL << PERF_REG_X86_XMM3) | \
- (1ULL << PERF_REG_X86_XMM4) | \
- (1ULL << PERF_REG_X86_XMM5) | \
- (1ULL << PERF_REG_X86_XMM6) | \
- (1ULL << PERF_REG_X86_XMM7) | \
- (1ULL << PERF_REG_X86_XMM8) | \
- (1ULL << PERF_REG_X86_XMM9) | \
- (1ULL << PERF_REG_X86_XMM10) | \
- (1ULL << PERF_REG_X86_XMM11) | \
- (1ULL << PERF_REG_X86_XMM12) | \
- (1ULL << PERF_REG_X86_XMM13) | \
- (1ULL << PERF_REG_X86_XMM14) | \
- (1ULL << PERF_REG_X86_XMM15))
-
/*
* Per register state.
*/
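[Editorial note, usage illustration only, not part of this patch] Below is a
hedged sketch of how a userspace profiler might request XMM registers in PEBS
samples, i.e. the sample_regs_intr path the patched checks guard. The register
values assume the PERF_REG_X86_XMM* enum from the x86 uapi perf_regs.h, and the
helper name is hypothetical:

	#include <string.h>
	#include <linux/perf_event.h>
	#include <asm/perf_regs.h>

	/* Hypothetical helper: request XMM0 in interrupt-time register samples. */
	static void request_xmm0_in_samples(struct perf_event_attr *attr)
	{
		memset(attr, 0, sizeof(*attr));
		attr->size = sizeof(*attr);
		attr->type = PERF_TYPE_HARDWARE;
		attr->config = PERF_COUNT_HW_CPU_CYCLES;
		attr->sample_period = 100000;
		attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_REGS_INTR;
		attr->precise_ip = 2;	/* request PEBS sampling */
		/* XMM0 is 128 bits wide and occupies two 64-bit sample slots. */
		attr->sample_regs_intr = (1ULL << PERF_REG_X86_XMM0) |
					 (1ULL << (PERF_REG_X86_XMM0 + 1));
	}

On PMUs without XMM support (x86_pmu.pebs_no_xmm_regs set), perf_event_open()
rejects such an attribute with -EINVAL, and sample_regs_user rejects these bits
unconditionally, matching the two checks in x86_pmu_hw_config() above.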