Message-Id: <1351268936-2956-1-git-send-email-fweisbec@gmail.com>
Date: Fri, 26 Oct 2012 18:28:56 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Michael Neuling <mikey@...ling.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Michael Ellerman <michael@...erman.id.au>,
Jovi Zhang <bookjovi@...il.com>,
K Prasad <prasad@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>
Subject: [PATCH] powerpc/perf: hw breakpoints return ENOSPC

From: Michael Neuling <mikey@...ling.org>

I've been trying to get hardware breakpoints with perf to work on POWER7
but I'm getting the following:

% perf record -e mem:0x10000000 true
Error: sys_perf_event_open() syscall returned with 28 (No space left on device). /bin/dmesg may provide additional information.
Fatal: No CONFIG_PERF_EVENTS=y kernel support configured?
true: Terminated

(FWIW, adding -a makes it work fine)

Debugging this, it seems that __reserve_bp_slot() is returning ENOSPC
because it thinks there are no free breakpoint slots on this CPU.

I have 2 CPUs, so perf userspace does two perf_event_open syscalls to add
a counter to each CPU [1]. The first syscall succeeds but the second fails.
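Roughly, those two syscalls look like the minimal userspace sketch below
(not part of the patch; the R/W access type, the 4-byte length and attaching
to the calling task are assumptions for illustration, not necessarily what
perf passes). On an unpatched kernel the second open fails with ENOSPC:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>
#include <linux/hw_breakpoint.h>

int main(void)
{
	struct perf_event_attr attr;
	int cpu;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_BREAKPOINT;
	attr.size = sizeof(attr);
	attr.bp_type = HW_BREAKPOINT_R | HW_BREAKPOINT_W;	/* mem:0x10000000 */
	attr.bp_addr = 0x10000000;
	attr.bp_len = HW_BREAKPOINT_LEN_4;

	for (cpu = 0; cpu < 2; cpu++) {
		/* pid = 0: count the calling task, but only while it runs on 'cpu' */
		int fd = syscall(__NR_perf_event_open, &attr, 0, cpu, -1, 0);

		printf("cpu %d: %s\n", cpu, fd < 0 ? strerror(errno) : "ok");
	}
	return 0;
}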
On this second syscall, fetch_bp_busy_slots() sets slots.pinned to 1,
despite there being no breakpoint on this CPU. This is because the call
to task_bp_pinned() counts the task's breakpoints on all CPUs, rather than
just the current CPU. POWER7 only has one hardware breakpoint per CPU
(ie. HBP_NUM=1), so we return ENOSPC.

The following patch fixes this by checking the associated CPU for each
breakpoint in task_bp_pinned(). I'm not familiar with this code, so it's
provided as a reference for the issue above.
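To make the accounting above concrete, here is a toy userspace model of the
old and new counting (a loose paraphrase, not the kernel code; HBP_NUM and
the pinned-plus-weight comparison only approximate what __reserve_bp_slot()
does):

#include <stdio.h>

#define HBP_NUM 1	/* POWER7: one hardware breakpoint per CPU */

struct bp { int cpu; };	/* a per-task breakpoint already on the list */

/* Old behaviour: the task's breakpoints are counted on every CPU. */
static int task_bp_pinned_old(int cpu, const struct bp *bps, int n)
{
	(void)cpu; (void)bps;
	return n;
}

/* New behaviour: count only the breakpoints bound to this CPU. */
static int task_bp_pinned_new(int cpu, const struct bp *bps, int n)
{
	int i, count = 0;

	for (i = 0; i < n; i++)
		if (bps[i].cpu == cpu)
			count++;
	return count;
}

int main(void)
{
	struct bp existing[] = { { .cpu = 0 } };	/* first syscall took CPU 0's slot */
	int cpu = 1, weight = 1;			/* second syscall wants a slot on CPU 1 */
	int pinned_old = task_bp_pinned_old(cpu, existing, 1) + weight;
	int pinned_new = task_bp_pinned_new(cpu, existing, 1) + weight;

	printf("old: pinned=%d -> %s\n", pinned_old, pinned_old > HBP_NUM ? "ENOSPC" : "fits");
	printf("new: pinned=%d -> %s\n", pinned_new, pinned_new > HBP_NUM ? "ENOSPC" : "fits");
	return 0;
}

It prints "old: pinned=2 -> ENOSPC" and "new: pinned=1 -> fits", matching
the failure described above.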
Signed-off-by: Michael Neuling <mikey@...ling.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Michael Ellerman <michael@...erman.id.au>
Cc: Jovi Zhang <bookjovi@...il.com>
Cc: K Prasad <prasad@...ux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/28857.1345091034@neuling.org
Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
---
kernel/events/hw_breakpoint.c | 12 +++++++-----
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index 9a7b487..fe8a916 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -111,14 +111,16 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type)
  * Count the number of breakpoints of the same type and same task.
  * The given event must be not on the list.
  */
-static int task_bp_pinned(struct perf_event *bp, enum bp_type_idx type)
+static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
 {
 	struct task_struct *tsk = bp->hw.bp_target;
 	struct perf_event *iter;
 	int count = 0;
 
 	list_for_each_entry(iter, &bp_task_head, hw.bp_list) {
-		if (iter->hw.bp_target == tsk && find_slot_idx(iter) == type)
+		if (iter->hw.bp_target == tsk &&
+		    find_slot_idx(iter) == type &&
+		    cpu == iter->cpu)
 			count += hw_breakpoint_weight(iter);
 	}
 
@@ -141,7 +143,7 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
 		if (!tsk)
 			slots->pinned += max_task_bp_pinned(cpu, type);
 		else
-			slots->pinned += task_bp_pinned(bp, type);
+			slots->pinned += task_bp_pinned(cpu, bp, type);
 		slots->flexible = per_cpu(nr_bp_flexible[type], cpu);
 
 		return;
@@ -154,7 +156,7 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
 		if (!tsk)
 			nr += max_task_bp_pinned(cpu, type);
 		else
-			nr += task_bp_pinned(bp, type);
+			nr += task_bp_pinned(cpu, bp, type);
 
 		if (nr > slots->pinned)
 			slots->pinned = nr;
@@ -188,7 +190,7 @@ static void toggle_bp_task_slot(struct perf_event *bp, int cpu, bool enable,
 	int old_idx = 0;
 	int idx = 0;
 
-	old_count = task_bp_pinned(bp, type);
+	old_count = task_bp_pinned(cpu, bp, type);
 	old_idx = old_count - 1;
 	idx = old_idx + weight;
--
1.7.5.4