Message-ID: <20201109173300.GM2611@hirez.programming.kicks-ass.net>
Date: Mon, 9 Nov 2020 18:33:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Liang, Kan" <kan.liang@...ux.intel.com>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
namhyung@...nel.org, eranian@...gle.com, irogers@...gle.com,
gmx@...gle.com, acme@...nel.org, jolsa@...hat.com,
ak@...ux.intel.com
Subject: Re: [PATCH 1/3] perf/core: Flush PMU internal buffers for per-CPU
events

On Mon, Nov 09, 2020 at 09:49:31AM -0500, Liang, Kan wrote:
> > Maybe we can frob x86_pmu_enable()...
>
> Could you please elaborate?

Something horrible like this. It will detect the first time we enable
the PMU on a new task (IOW we did a context switch) and wipe the
counters when user RDPMC is on...

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 77b963e5e70a..d862927baaef 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1289,6 +1289,15 @@ static void x86_pmu_enable(struct pmu *pmu)
 		perf_events_lapic_init();
 	}
 
+	if (cpuc->current != current) {
+		struct mm_struct *mm = current->mm;
+
+		cpuc->current = current;
+
+		if (mm && atomic_read(&mm->context.perf_rdpmc_allowed))
+			wipe_dirty_counters();
+	}
+
 	cpuc->enabled = 1;
 
 	barrier();
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 7895cf4c59a7..d16118cb3bd0 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -248,6 +248,8 @@ struct cpu_hw_events {
 	unsigned int		txn_flags;
 	int			is_fake;
 
+	void			*current;
+
 	/*
 	 * Intel DebugStore bits
 	 */
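
For reference, wipe_dirty_counters() is not defined anywhere in the patch
above. A minimal sketch of what such a helper could look like, assuming
cpu_hw_events also grows a 'dirty' bitmask that marks counters written
since the last wipe (both the helper body and the 'dirty' field are
assumptions for illustration, not part of Peter's patch):

/*
 * Hypothetical helper, not from the patch above: zero all counters that
 * were touched on this CPU, assuming cpuc->dirty is a bitmask of
 * counter indexes written since the last wipe.
 */
static void wipe_dirty_counters(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
		if (i >= INTEL_PMC_IDX_FIXED)
			/* Fixed counters live in their own MSR range. */
			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 +
			       (i - INTEL_PMC_IDX_FIXED), 0);
		else
			wrmsrl(x86_pmu_event_addr(i), 0);
	}

	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
}

The point of keying the wipe on cpuc->current changing is that the MSR
writes are only paid on the first PMU enable after a context switch, and
only for tasks that have user RDPMC enabled.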