Message-Id: <20190521214055.31060-2-kan.liang@linux.intel.com>
Date: Tue, 21 May 2019 14:40:47 -0700
From: kan.liang@...ux.intel.com
To: peterz@...radead.org, acme@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, jolsa@...nel.org, eranian@...gle.com,
alexander.shishkin@...ux.intel.com, ak@...ux.intel.com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH 1/9] perf/core: Support a REMOVE transaction
From: Andi Kleen <ak@...ux.intel.com>
On Icelake, TopDown events can be collected per thread/process. To use
TopDown through RDPMC in applications, the metrics and slots MSR values
have to be saved/restored across context switches.
A remove transaction when the counter is unscheduled lets the driver
save those values correctly.
Add a remove transaction to the perf core.
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
arch/x86/events/core.c | 3 +--
include/linux/perf_event.h | 1 +
kernel/events/core.c | 5 +++++
3 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f0e4804515d8..e075de494dfd 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1856,8 +1856,7 @@ static inline void x86_pmu_read(struct perf_event *event)
* Set the flag to make pmu::enable() not perform the
* schedulability test, it will be performed at commit time
*
- * We only support PERF_PMU_TXN_ADD transactions. Save the
- * transaction flags but otherwise ignore non-PERF_PMU_TXN_ADD
+ * Save the transaction flags and ignore non-PERF_PMU_TXN_ADD
* transactions.
*/
static void x86_pmu_start_txn(struct pmu *pmu, unsigned int txn_flags)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 5beb5cde3d56..973b7f8ce8e9 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -233,6 +233,7 @@ struct perf_event;
*/
#define PERF_PMU_TXN_ADD 0x1 /* txn to add/schedule event on PMU */
#define PERF_PMU_TXN_READ 0x2 /* txn to read event group from PMU */
+#define PERF_PMU_TXN_REMOVE 0x4 /* txn to remove event on PMU */
/**
* pmu::capabilities flags
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 118ad1aef6af..f204166f6bc8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2032,6 +2032,7 @@ group_sched_out(struct perf_event *group_event,
struct perf_cpu_context *cpuctx,
struct perf_event_context *ctx)
{
+ struct pmu *pmu = ctx->pmu;
struct perf_event *event;
if (group_event->state != PERF_EVENT_STATE_ACTIVE)
@@ -2039,6 +2040,8 @@ group_sched_out(struct perf_event *group_event,
perf_pmu_disable(ctx->pmu);
+ pmu->start_txn(pmu, PERF_PMU_TXN_REMOVE);
+
event_sched_out(group_event, cpuctx, ctx);
/*
@@ -2051,6 +2054,8 @@ group_sched_out(struct perf_event *group_event,
if (group_event->attr.exclusive)
cpuctx->exclusive = 0;
+
+ pmu->commit_txn(pmu);
}
#define DETACH_GROUP 0x01UL
--
2.14.5