Message-ID: <161891560757.29796.6555570837621149245.tip-bot2@tip-bot2>
Date: Tue, 20 Apr 2021 10:46:47 -0000
From: "tip-bot2 for Kan Liang" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
Kan Liang <kan.liang@...ux.intel.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: perf/core] perf/x86: Hybrid PMU support for unconstrained

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     eaacf07d1116f6bf3b93b265515fccf2301097f2
Gitweb:        https://git.kernel.org/tip/eaacf07d1116f6bf3b93b265515fccf2301097f2
Author:        Kan Liang <kan.liang@...ux.intel.com>
AuthorDate:    Mon, 12 Apr 2021 07:30:47 -07:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Mon, 19 Apr 2021 20:03:25 +02:00
perf/x86: Hybrid PMU support for unconstrained

The unconstrained value depends on the number of GP and fixed counters.
Each hybrid PMU should therefore use its own unconstrained event
constraint.
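
As an illustration only (not part of the original commit message), the
hybrid_var() accessor added below behaves roughly like the following
hypothetical helper; the function name is made up for this sketch, while
is_hybrid(), hybrid_pmu(), cpuc->pmu and the global 'unconstrained' come
from the patched sources:

	/* Sketch: pick the per-PMU copy on hybrid parts, else the global one. */
	static struct event_constraint *
	get_unconstrained_sketch(struct cpu_hw_events *cpuc)
	{
		if (is_hybrid() && cpuc->pmu)
			return &hybrid_pmu(cpuc->pmu)->unconstrained;

		return &unconstrained;
	}
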
Suggested-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/1618237865-33448-8-git-send-email-kan.liang@linux.intel.com
---
 arch/x86/events/intel/core.c |  2 +-
 arch/x86/events/perf_event.h | 11 +++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 3ea0126..4cfc382 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3147,7 +3147,7 @@ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 		}
 	}
 
-	return &unconstrained;
+	return &hybrid_var(cpuc->pmu, unconstrained);
 }
 
 static struct event_constraint *
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 0539ad4..2688e45 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -638,6 +638,7 @@ struct x86_hybrid_pmu {
 	int				max_pebs_events;
 	int				num_counters;
 	int				num_counters_fixed;
+	struct event_constraint		unconstrained;
 };
 
 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
@@ -658,6 +659,16 @@ extern struct static_key_false perf_is_hybrid;
 	__Fp;						\
 }))
 
+#define hybrid_var(_pmu, _var)				\
+(*({							\
+	typeof(&_var) __Fp = &_var;			\
+							\
+	if (is_hybrid() && (_pmu))			\
+		__Fp = &hybrid_pmu(_pmu)->_var;		\
+							\
+	__Fp;						\
+}))
+
 /*
  * struct x86_pmu - generic x86 pmu
  */
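
Not part of this patch: a rough sketch of how the new per-PMU field
could be populated at init time from that PMU's own GP counter count,
assuming the existing __EVENT_CONSTRAINT() helper from perf_event.h;
the function name here is hypothetical and the surrounding init path
is simplified:

	/*
	 * Sketch: build a per-PMU "no constraint" event constraint whose
	 * counter mask covers only this hybrid PMU's own GP counters.
	 */
	static void init_hybrid_unconstrained_sketch(struct x86_hybrid_pmu *pmu)
	{
		pmu->unconstrained = (struct event_constraint)
			__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
					   0, pmu->num_counters, 0, 0);
	}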