Message-ID: <YJ0/bjek1ihh/2Ea@hirez.programming.kicks-ass.net>
Date: Thu, 13 May 2021 17:02:06 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: kan.liang@...ux.intel.com
Cc: mingo@...hat.com, linux-kernel@...r.kernel.org, robh@...nel.org,
ak@...ux.intel.com, acme@...nel.org, mark.rutland@....com,
luto@...capital.net, eranian@...gle.com, namhyung@...nel.org
Subject: Re: [PATCH V7 2/2] perf/x86: Reset the dirty counter to prevent the
leak for an RDPMC task
On Thu, May 13, 2021 at 07:23:02AM -0700, kan.liang@...ux.intel.com wrote:
> +	if (x86_pmu.sched_task && event->hw.target) {
> +		atomic_inc(&event->pmu->sched_cb_usage);
> +		local_irq_save(flags);
> +		x86_pmu_clear_dirty_counters();
> +		local_irq_restore(flags);
> +	}

So what happens if our mmap() happens after we've already created two
(or more) threads in the process, all of which already have a counter
(or more) on?

Shouldn't this be something like this?

--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2474,7 +2474,7 @@ static int x86_pmu_event_init(struct per
 	return err;
 }
 
-static void x86_pmu_clear_dirty_counters(void)
+static void x86_pmu_clear_dirty_counters(void *unused)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	int i;
@@ -2512,9 +2512,9 @@ static void x86_pmu_event_mapped(struct
 	 */
 	if (x86_pmu.sched_task && event->hw.target) {
 		atomic_inc(&event->pmu->sched_cb_usage);
-		local_irq_save(flags);
-		x86_pmu_clear_dirty_counters();
-		local_irq_restore(flags);
+		on_each_cpu_mask(mm_cpumask(mm),
+				 x86_pmu_clear_dirty_counters,
+				 NULL, true);
 	}
 
 	/*
@@ -2653,7 +2653,7 @@ static void x86_pmu_sched_task(struct pe
 	 */
 	if (sched_in && ctx && READ_ONCE(x86_pmu.attr_rdpmc) &&
 	    current->mm && atomic_read(&current->mm->context.perf_rdpmc_allowed))
-		x86_pmu_clear_dirty_counters();
+		x86_pmu_clear_dirty_counters(NULL);
 }
 
 static void x86_pmu_swap_task_ctx(struct perf_event_context *prev,
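
For reference, a minimal sketch (hypothetical helper names, not taken
from the patch) of the on_each_cpu_mask() calling convention the hunk
above relies on: the callback must be an smp_call_func_t, i.e.
void (*)(void *), which is why x86_pmu_clear_dirty_counters() grows the
unused void * argument, and the IPI path already invokes the callback
with interrupts disabled, so the explicit local_irq_save()/restore()
pair becomes unnecessary:

#include <linux/mm_types.h>
#include <linux/smp.h>

/*
 * Illustrative stand-in for x86_pmu_clear_dirty_counters(); the real
 * function writes the counters flagged in cpuc->dirty back to 0.
 */
static void clear_dirty_counters_ipi(void *unused)
{
	/* Runs on each targeted CPU with interrupts disabled. */
}

/*
 * Hypothetical caller: run the callback on every CPU that may have
 * this mm (and hence one of its threads) loaded.
 */
static void clear_dirty_on_mm(struct mm_struct *mm)
{
	on_each_cpu_mask(mm_cpumask(mm), clear_dirty_counters_ipi,
			 NULL, true);
}

Passing wait=true means the clearing has finished on all those CPUs
before the mmap() returns, so a thread that is already running with a
counter loaded cannot read a stale value afterwards.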