Message-Id: <1264480000-6997-3-git-send-email-jason.wessel@windriver.com>
Date: Mon, 25 Jan 2010 22:26:38 -0600
From: Jason Wessel <jason.wessel@...driver.com>
To: linux-kernel@...r.kernel.org
Cc: kgdb-bugreport@...ts.sourceforge.net, mingo@...e.hu,
Jason Wessel <jason.wessel@...driver.com>,
Frederic Weisbecker <fweisbec@...il.com>,
"K.Prasad" <prasad@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Alan Stern <stern@...land.harvard.edu>
Subject: [PATCH 2/4] perf,hw_breakpoint: add lockless reservation for hw_breaks

The kernel debugger cannot take any locks without risking a deadlock
of the system. This patch implements a simple reservation scheme
around an atomic variable that is initialized to the maximum number
of system-wide breakpoints; whenever the variable goes negative, no
unreserved hw breakpoint slots remain. With four slots, for example,
a fifth concurrent reservation drives the counter below zero and is
refused.
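
For illustration, the reservation pattern reduces to the following
self-contained sketch (try_reserve_slot() and release_slot() are
hypothetical wrappers invented for this example; the patch operates
on dbg_slots_pinned directly):

#include <linux/atomic.h>
#include <linux/errno.h>

static atomic_t slots = ATOMIC_INIT(4);	/* e.g. HBP_NUM slots on x86 */

/* Reserve one slot without taking any locks. */
static int try_reserve_slot(void)
{
	/*
	 * atomic_add_negative() subtracts 1 and returns true if the
	 * result went negative, i.e. every slot was already taken.
	 */
	if (atomic_add_negative(-1, &slots)) {
		atomic_inc(&slots);	/* roll back the failed claim */
		return -ENOSPC;
	}
	return 0;
}

/* Return a previously reserved slot to the pool. */
static void release_slot(void)
{
	atomic_inc(&slots);
}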

The perf hw breakpoint API needs to keep the accounting correct for
the number of system-wide breakpoints available at any given time.
The kernel debugger uses the same reservation semantics, but it
installs and removes breakpoints through the low-level API calls
while general kernel execution is paused.
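
As a hedged sketch of the debugger-side pattern (mirroring the
kgdb_set_hw_break() change below; dbg_arm_slot() stands in for
programming a debug register and is not part of this patch):

extern atomic_t dbg_slots_pinned;	/* from the patch below */

static int dbg_arm_slot(unsigned long addr);	/* hypothetical helper */

static int dbg_set_break(unsigned long addr)
{
	/* Claim a slot from the shared pool first, locklessly ... */
	if (atomic_add_negative(-1, &dbg_slots_pinned)) {
		atomic_inc(&dbg_slots_pinned);
		return -1;
	}
	/* ... then install the breakpoint while the system is paused. */
	if (dbg_arm_slot(addr) < 0) {
		/* Every failure path must return the slot exactly once. */
		atomic_inc(&dbg_slots_pinned);
		return -1;
	}
	return 0;
}
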
CC: Frederic Weisbecker <fweisbec@...il.com>
CC: Ingo Molnar <mingo@...e.hu>
CC: K.Prasad <prasad@...ux.vnet.ibm.com>
CC: Peter Zijlstra <peterz@...radead.org>
CC: Alan Stern <stern@...land.harvard.edu>
Signed-off-by: Jason Wessel <jason.wessel@...driver.com>
---
arch/x86/kernel/kgdb.c | 12 +++++++++---
include/linux/perf_event.h | 1 +
kernel/hw_breakpoint.c | 16 ++++++++++++++++
3 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 3cb2828..2a31f35 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -251,6 +251,7 @@ kgdb_remove_hw_break(unsigned long addr, int len, enum kgdb_bptype bptype)
 		return -1;
 
 	breakinfo[i].enabled = 0;
+	atomic_inc(&dbg_slots_pinned);
 
 	return 0;
 }
@@ -277,11 +278,13 @@ kgdb_set_hw_break(unsigned long addr, int len, enum kgdb_bptype bptype)
 {
 	int i;
 
+	if (atomic_add_negative(-1, &dbg_slots_pinned))
+		goto err_out;
 	for (i = 0; i < 4; i++)
 		if (!breakinfo[i].enabled)
 			break;
 	if (i == 4)
-		return -1;
+		goto err_out;
 
 	switch (bptype) {
 	case BP_HARDWARE_BREAKPOINT:
@@ -295,7 +298,7 @@ kgdb_set_hw_break(unsigned long addr, int len, enum kgdb_bptype bptype)
 		breakinfo[i].type = X86_BREAKPOINT_RW;
 		break;
 	default:
-		return -1;
+		goto err_out;
 	}
 	switch (len) {
 	case 1:
@@ -313,12 +316,15 @@ kgdb_set_hw_break(unsigned long addr, int len, enum kgdb_bptype bptype)
 		break;
 #endif
 	default:
-		return -1;
+		goto err_out;
 	}
 
 	breakinfo[i].addr = addr;
 	breakinfo[i].enabled = 1;
 	return 0;
+err_out:
+	atomic_inc(&dbg_slots_pinned);
+	return -1;
 }
 
 /**
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 8fa7187..71f3f05 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -825,6 +825,7 @@ static inline int is_software_event(struct perf_event *event)
 }
 
 extern atomic_t perf_swevent_enabled[PERF_COUNT_SW_MAX];
+extern atomic_t dbg_slots_pinned;
 
 extern void __perf_sw_event(u32, u64, int, struct pt_regs *, u64);
 
diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c
index 50dbd59..ddf7951 100644
--- a/kernel/hw_breakpoint.c
+++ b/kernel/hw_breakpoint.c
@@ -55,6 +55,9 @@ static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned);
 
 /* Number of pinned task breakpoints in a cpu */
 static DEFINE_PER_CPU(unsigned int, nr_task_bp_pinned[HBP_NUM]);
 
+/* Slots pinned atomically by the debugger */
+atomic_t dbg_slots_pinned = ATOMIC_INIT(HBP_NUM);
+
 /* Number of non-pinned cpu/task breakpoints in a cpu */
 static DEFINE_PER_CPU(unsigned int, nr_bp_flexible);
@@ -249,12 +252,24 @@ int reserve_bp_slot(struct perf_event *bp)
 	int ret = 0;
 
 	mutex_lock(&nr_bp_mutex);
+	/*
+	 * Grab a dbg_slots_pinned allocation. This atomic variable
+	 * allows lockless sharing between the kernel debugger and the
+	 * perf hw breakpoints. It represents the total number of
+	 * available system wide breakpoints.
+	 */
+	if (atomic_add_negative(-1, &dbg_slots_pinned)) {
+		atomic_inc(&dbg_slots_pinned);
+		ret = -ENOSPC;
+		goto end;
+	}
 
 	fetch_bp_busy_slots(&slots, bp);
 
 	/* Flexible counters need to keep at least one slot */
 	if (slots.pinned + (!!slots.flexible) == HBP_NUM) {
 		ret = -ENOSPC;
+		atomic_inc(&dbg_slots_pinned);
 		goto end;
 	}
 
@@ -271,6 +286,7 @@ void release_bp_slot(struct perf_event *bp)
 	mutex_lock(&nr_bp_mutex);
 
 	toggle_bp_slot(bp, false);
+	atomic_inc(&dbg_slots_pinned);
 
 	mutex_unlock(&nr_bp_mutex);
 }
--
1.6.3.3