Message-ID: <20100915162034.GO13563@erda.amd.com>
Date: Wed, 15 Sep 2010 18:20:34 +0200
From: Robert Richter <robert.richter@....com>
To: Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>
CC: Don Zickus <dzickus@...hat.com>,
"gorcunov@...il.com" <gorcunov@...il.com>,
"fweisbec@...il.com" <fweisbec@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"ying.huang@...el.com" <ying.huang@...el.com>,
"ming.m.lin@...el.com" <ming.m.lin@...el.com>,
"yinghai@...nel.org" <yinghai@...nel.org>,
"andi@...stfloor.org" <andi@...stfloor.org>,
"eranian@...gle.com" <eranian@...gle.com>
Subject: [PATCH] perf, x86: catch spurious interrupts after disabling
counters
On 14.09.10 19:41:32, Robert Richter wrote:
> I found the reason why we get the unknown NMI. For some reason
> cpuc->active_mask in x86_pmu_handle_irq() is zero, so no counters
> are handled when the NMI arrives. There seems to be a race somewhere
> in accessing active_mask. So far I don't have a fix available;
> changing x86_pmu_stop() did not help.
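To make that failure mode concrete, here is a minimal user-space model of the interleaving (plain C with made-up names such as NUM_COUNTERS and handle_irq(), not the kernel code): the stop path clears the counter's active_mask bit while an NMI from that counter is still in flight, so the handler claims nothing and the NMI is reported as unknown.

/*
 * Minimal user-space sketch of the race described above;
 * hypothetical names, not kernel code.
 */
#include <stdio.h>

#define NUM_COUNTERS 4

static unsigned long active_mask;	/* models cpuc->active_mask */

static int handle_irq(void)
{
	int idx, handled = 0;

	for (idx = 0; idx < NUM_COUNTERS; idx++) {
		if (!(active_mask & (1UL << idx)))
			continue;	/* counter not active: skip it */
		handled++;		/* pretend we serviced an overflow */
	}
	return handled;
}

int main(void)
{
	active_mask |= 1UL << 0;	/* x86_pmu_start(): counter 0 active */
	/* counter 0 overflows; its NMI is now in flight */
	active_mask &= ~(1UL << 0);	/* x86_pmu_stop(): bit cleared first */

	/* the in-flight NMI is delivered only now */
	if (!handle_irq())
		printf("nobody claimed the NMI -> 'unknown NMI'\n");
	return 0;
}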
The patch below for tip/perf/urgent fixes this.
-Robert
From 4206a086f5b37efc1b4d94f1d90b55802b299ca0 Mon Sep 17 00:00:00 2001
From: Robert Richter <robert.richter@....com>
Date: Wed, 15 Sep 2010 16:12:59 +0200
Subject: [PATCH] perf, x86: catch spurious interrupts after disabling counters
Some cpus still deliver spurious interrupts after a counter has been
disabled. This caused 'unknown NMI' messages. Fix this by marking
every started counter in a new per-cpu 'running' bitmask and treating
such late interrupts as handled in the NMI handler.
Signed-off-by: Robert Richter <robert.richter@....com>
---
arch/x86/kernel/cpu/perf_event.c | 13 ++++++++++++-
1 files changed, 12 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 3efdf28..df7aabd 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -102,6 +102,7 @@ struct cpu_hw_events {
 	 */
 	struct perf_event	*events[X86_PMC_IDX_MAX]; /* in counter order */
 	unsigned long		active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+	unsigned long		running[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	int			enabled;
 
 	int			n_events;
@@ -1010,6 +1011,7 @@ static int x86_pmu_start(struct perf_event *event)
 	x86_perf_event_set_period(event);
 	cpuc->events[idx] = event;
 	__set_bit(idx, cpuc->active_mask);
+	__set_bit(idx, cpuc->running);
 	x86_pmu.enable(event);
 	perf_event_update_userpage(event);
 
@@ -1141,8 +1143,17 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
 	cpuc = &__get_cpu_var(cpu_hw_events);
 
 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
-		if (!test_bit(idx, cpuc->active_mask))
+		if (!test_bit(idx, cpuc->active_mask)) {
+			if (__test_and_clear_bit(idx, cpuc->running))
+				/*
+				 * Though we deactivated the counter
+				 * some cpus might still deliver
+				 * spurious interrupts. Catching them
+				 * here.
+				 */
+				handled++;
 			continue;
+		}
 
 		event = cpuc->events[idx];
 		hwc = &event->hw;
--
1.7.2.2
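For reference, here is a self-contained model of what the new 'running' bitmask buys (again user-space C with hypothetical stand-ins such as test_and_clear_bit(), not the patched file): a started counter leaves a bit behind in 'running', and the first interrupt that arrives after the counter was stopped is claimed exactly once instead of escalating to an unknown NMI.

#include <stdio.h>

#define NUM_COUNTERS 4

static unsigned long active_mask;	/* models cpuc->active_mask */
static unsigned long running;		/* models the new cpuc->running */

/* crude stand-in for the kernel's __test_and_clear_bit() */
static int test_and_clear_bit(int idx, unsigned long *mask)
{
	int was_set = !!(*mask & (1UL << idx));

	*mask &= ~(1UL << idx);
	return was_set;
}

static int handle_irq(void)
{
	int idx, handled = 0;

	for (idx = 0; idx < NUM_COUNTERS; idx++) {
		if (!(active_mask & (1UL << idx))) {
			/* counter disabled, but it may have fired late */
			if (test_and_clear_bit(idx, &running))
				handled++;
			continue;
		}
		handled++;	/* normal overflow handling */
	}
	return handled;
}

int main(void)
{
	/* x86_pmu_start(): both masks get the bit */
	active_mask |= 1UL << 0;
	running     |= 1UL << 0;

	/* x86_pmu_stop(): only active_mask is cleared */
	active_mask &= ~(1UL << 0);

	printf("late NMI:    handled=%d\n", handle_irq()); /* 1: claimed */
	printf("another NMI: handled=%d\n", handle_irq()); /* 0: truly unknown */
	return 0;
}

Because the bit is consumed on first use, a second stray interrupt from the same counter would still be reported as unknown; the mask only absorbs the one interrupt that can legitimately still be in flight.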
--
Advanced Micro Devices, Inc.
Operating System Research Center