Message-ID: <03fff406-3050-57dc-1f17-0f5630e810af@linux.intel.com>
Date: Wed, 12 May 2021 10:09:59 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Rob Herring <robh@...nel.org>, Ingo Molnar <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Andy Lutomirski <luto@...capital.net>,
Stephane Eranian <eranian@...gle.com>,
Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH V6] perf: Reset the dirty counter to prevent the leak for
an RDPMC task
On 5/12/2021 3:35 AM, Peter Zijlstra wrote:
> On Tue, May 11, 2021 at 05:42:54PM -0400, Liang, Kan wrote:
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 1574b70..8216acc 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3851,7 +3851,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
>>  	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
>>  	perf_event_sched_in(cpuctx, ctx, task);
>>  
>> -	if (cpuctx->sched_cb_usage && pmu->sched_task)
>> +	if (pmu->sched_task && (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usages)))
>>  		pmu->sched_task(cpuctx->task_ctx, true);
>
> Aside from the obvious whitespace issues; I think this should work.
>
Thanks. The whitespace damage was likely introduced by the copy/paste. I will
fix it in V7.
I did more tests. In some cases, I can still observe a dirty counter on the
first RDPMC read. I think we still have to clear the dirty counters in
x86_pmu_event_mapped() for the first RDPMC read. I have to disable interrupts
there to prevent preemption, since the clearing operates on the current CPU's
counters.
 static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
+	unsigned long flags;
+
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
 
 	/*
+	 * Enable sched_task() for the RDPMC task,
+	 * and clear the existing dirty counters.
+	 */
+	if (x86_pmu.sched_task && event->hw.target) {
+		atomic_inc(&event->pmu->sched_cb_usages);
+		local_irq_save(flags);
+		x86_pmu_clear_dirty_counters();
+		local_irq_restore(flags);
+	}
+
+	/*
 	 * This function relies on not being called concurrently in two
 	 * tasks in the same mm. Otherwise one task could observe
 	 * perf_rdpmc_allowed > 1 and return all the way back to
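
For reference, the clearing itself just walks the per-cpu dirty bitmap and
zeroes the leftover counter MSRs, roughly along the lines of the sketch below
(simplified; the special cases for the metrics/fixed-VLBR pseudo counters are
omitted here and the details may differ in the actual patch):

static void x86_pmu_clear_dirty_counters(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	/* Counters that are still assigned to events are not dirty. */
	for (i = 0; i < cpuc->n_events; i++)
		__clear_bit(cpuc->assign[i], cpuc->dirty);

	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
		return;

	/* Zero the leftover counters so a new RDPMC task cannot read them. */
	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
		if (i >= INTEL_PMC_IDX_FIXED)
			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
		else
			wrmsrl(x86_pmu_event_addr(i), 0);
	}

	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
}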
Thanks,
Kan