Message-ID: <tip-8a7b8cb91f26a671f22cedc7fd54508667f2d9b9@git.kernel.org>
Date: Tue, 26 May 2009 07:45:38 GMT
From: tip-bot for Paul Mackerras <paulus@...ba.org>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, paulus@...ba.org, hpa@...or.com,
mingo@...hat.com, a.p.zijlstra@...llo.nl, tglx@...utronix.de,
cjashfor@...ux.vnet.ibm.com, mingo@...e.hu
Subject: [tip:perfcounters/core] perf_counter: powerpc: Implement interrupt throttling
Commit-ID: 8a7b8cb91f26a671f22cedc7fd54508667f2d9b9
Gitweb: http://git.kernel.org/tip/8a7b8cb91f26a671f22cedc7fd54508667f2d9b9
Author: Paul Mackerras <paulus@...ba.org>
AuthorDate: Tue, 26 May 2009 16:27:59 +1000
Committer: Ingo Molnar <mingo@...e.hu>
CommitDate: Tue, 26 May 2009 09:43:59 +0200
perf_counter: powerpc: Implement interrupt throttling

This implements interrupt throttling on powerpc. Since we don't have
individual count enable/disable or interrupt enable/disable controls
per counter, this simply sets the hardware counter to 0, meaning that
it will not interrupt again until it has counted 2^31 counts, which
will take at least 2^30 cycles assuming a maximum of 2 counts per
cycle. Also, we set counter->hw.period_left to the maximum possible
value (2^63 - 1), so we won't report overflows for this counter for
the foreseeable future.

The unthrottle operation restores counter->hw.period_left and the
hardware counter so that we will once again report a counter overflow
after counter->hw.irq_period counts.
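
(Illustration only, not part of the patch: the arithmetic behind these values
can be sketched in user space. The PMCs here are 32-bit counters that raise
their interrupt when they go negative, i.e. when bit 31 becomes set, so a
counter loaded with 0x80000000 - period interrupts after `period` further
counts, while a counter loaded with 0 is a full 2^31 counts away from its next
interrupt. The helper name below is made up for the sketch.)

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical helper mirroring the patch's computation: loading
 * 0x80000000 - period into a 32-bit PMC gives an interrupt after
 * "period" further counts.  Periods of 2^31 or more cannot be
 * expressed, so the PMC is left at 0.
 */
static uint32_t pmc_value_for_period(int64_t period)
{
	if (period > 0 && period < 0x80000000LL)
		return (uint32_t)(0x80000000LL - period);
	return 0;
}

int main(void)
{
	/* Normal sampling: interrupt after irq_period counts. */
	int64_t irq_period = 100000;
	printf("sampling:  PMC = 0x%08x\n",
	       (unsigned int)pmc_value_for_period(irq_period));

	/* Throttled, as in the patch: PMC = 0, period_left = 2^63 - 1. */
	int64_t throttled_left = ~0ULL >> 1;
	printf("throttled: PMC = 0x%08x, period_left = %lld\n",
	       0u, (long long)throttled_left);
	return 0;
}

Throttling thus pushes both the next hardware interrupt and the software
overflow horizon as far out as possible; power_pmu_unthrottle() below simply
recomputes the normal values from counter->hw.irq_period.
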
[ Impact: new perfcounters robustness feature on PowerPC ]
Signed-off-by: Paul Mackerras <paulus@...ba.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Corey Ashford <cjashfor@...ux.vnet.ibm.com>
LKML-Reference: <18971.35823.643362.446774@...go.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
arch/powerpc/kernel/perf_counter.c | 48 ++++++++++++++++++++++++++++++++---
1 files changed, 43 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/perf_counter.c b/arch/powerpc/kernel/perf_counter.c
index fe21b24..f96d55f 100644
--- a/arch/powerpc/kernel/perf_counter.c
+++ b/arch/powerpc/kernel/perf_counter.c
@@ -740,10 +740,37 @@ static void power_pmu_disable(struct perf_counter *counter)
 	local_irq_restore(flags);
 }
 
+/*
+ * Re-enable interrupts on a counter after they were throttled
+ * because they were coming too fast.
+ */
+static void power_pmu_unthrottle(struct perf_counter *counter)
+{
+	s64 val, left;
+	unsigned long flags;
+
+	if (!counter->hw.idx || !counter->hw.irq_period)
+		return;
+	local_irq_save(flags);
+	perf_disable();
+	power_pmu_read(counter);
+	left = counter->hw.irq_period;
+	val = 0;
+	if (left < 0x80000000L)
+		val = 0x80000000L - left;
+	write_pmc(counter->hw.idx, val);
+	atomic64_set(&counter->hw.prev_count, val);
+	atomic64_set(&counter->hw.period_left, left);
+	perf_counter_update_userpage(counter);
+	perf_enable();
+	local_irq_restore(flags);
+}
+
 struct pmu power_pmu = {
 	.enable		= power_pmu_enable,
 	.disable	= power_pmu_disable,
 	.read		= power_pmu_read,
+	.unthrottle	= power_pmu_unthrottle,
 };
 
 /*
@@ -957,10 +984,6 @@ static void record_and_restart(struct perf_counter *counter, long val,
 		if (left < 0x80000000L)
 			val = 0x80000000L - left;
 	}
-	write_pmc(counter->hw.idx, val);
-	atomic64_set(&counter->hw.prev_count, val);
-	atomic64_set(&counter->hw.period_left, left);
-	perf_counter_update_userpage(counter);
 
 	/*
 	 * Finally record data if requested.
@@ -983,8 +1006,23 @@ static void record_and_restart(struct perf_counter *counter, long val,
 		if (!(mmcra & MMCRA_SAMPLE_ENABLE) || (mmcra & sdsync))
 			addr = mfspr(SPRN_SDAR);
 		}
-		perf_counter_overflow(counter, nmi, regs, addr);
+		if (perf_counter_overflow(counter, nmi, regs, addr)) {
+			/*
+			 * Interrupts are coming too fast - throttle them
+			 * by setting the counter to 0, so it will be
+			 * at least 2^30 cycles until the next interrupt
+			 * (assuming each counter counts at most 2 counts
+			 * per cycle).
+			 */
+			val = 0;
+			left = ~0ULL >> 1;
+		}
 	}
+
+	write_pmc(counter->hw.idx, val);
+	atomic64_set(&counter->hw.prev_count, val);
+	atomic64_set(&counter->hw.period_left, left);
+	perf_counter_update_userpage(counter);
 }
 
 /*
--