Message-ID: <20200908103736.GP1362448@hirez.programming.kicks-ass.net>
Date: Tue, 8 Sep 2020 12:37:36 +0200
From: peterz@...radead.org
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
Eddy_Wu@...ndmicro.com, x86@...nel.org, davem@...emloft.net,
rostedt@...dmis.org, naveen.n.rao@...ux.ibm.com,
anil.s.keshavamurthy@...el.com, linux-arch@...r.kernel.org,
cameron@...dycamel.com, oleg@...hat.com, will@...nel.org,
paulmck@...nel.org, systemtap@...rceware.org
Subject: Re: [PATCH v5 00/21] kprobes: Unify kretprobe trampoline handlers
and make kretprobe lockless

On Thu, Sep 03, 2020 at 10:39:54AM +0900, Masami Hiramatsu wrote:
> > There's a bug that might make it miss it. I have a patch. I'll send it
> > shortly.
>
> OK, I've confirmed that lockdep warns on kretprobe from INT3
> with your fix.

I'm now trying and failing to reproduce... I can't seem to make it use
int3 today; it seems to want to use ftrace or refuses everything. I'm
probably doing it wrong.

Could you verify the below actually works? It's on top of the first 16
patches.
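
In case it helps with reproduction, a minimal test module along the
lines of samples/kprobes/kretprobe_example.c (the target symbol below is
just an assumption, substitute whatever you're probing) should exercise
the trampoline handler. One note: with CONFIG_KPROBES_ON_FTRACE the
entry probe sits on the fentry site and becomes ftrace-based, which may
be why int3 refuses to show up; probing a function without an ftrace
location (e.g. something marked notrace) should force the int3 path.

#include <linux/module.h>
#include <linux/kprobes.h>

static int test_ret_handler(struct kretprobe_instance *ri,
			    struct pt_regs *regs)
{
	/* Runs from the kretprobe trampoline handler. */
	pr_info("retprobe: ret=0x%lx\n", regs_return_value(regs));
	return 0;
}

static struct kretprobe test_rp = {
	.kp.symbol_name	= "_do_fork",	/* assumed target */
	.handler	= test_ret_handler,
	.maxactive	= 16,
};

static int __init test_init(void)
{
	return register_kretprobe(&test_rp);
}

static void __exit test_exit(void)
{
	unregister_kretprobe(&test_rp);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");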

> Of course, if we make it lockless then the warning is gone.
> But even without the lockless patch, this warning can be a false
> positive because we prohibit nested kprobe calls, right?

Yes, because the actual nesting is avoided by kprobe_busy, but lockdep
can't tell. Lockdep sees a regular lock user and an in-NMI lock user and
figures that's a bad combination.
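
To spell out the shape of it, this is roughly what kprobe_flush_task()
ends up doing (a simplified sketch, not literal tree code):

	static void flush_retprobes_sketch(struct task_struct *tsk)
	{
		unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
		unsigned long flags;

		kprobe_busy_begin();	/* current_kprobe = &kprobe_busy */
		/*
		 * From here until kprobe_busy_end(), any kprobe firing
		 * on this CPU is rejected as a reentry, so the table
		 * lock below can never actually be taken recursively.
		 */
		kretprobe_table_lock(hash, &flags);
		/* ... recycle this task's return instances ... */
		kretprobe_table_unlock(hash, &flags);
		kprobe_busy_end();
	}

The hack below just gives the in_nmi() acquisition its own lockdep
subclass, so the task-context and NMI-context users stop being reported
as the same class.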
---
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1241,48 +1241,47 @@ void recycle_rp_inst(struct kretprobe_in
 }
 NOKPROBE_SYMBOL(recycle_rp_inst);
 
-void kretprobe_hash_lock(struct task_struct *tsk,
-			 struct hlist_head **head, unsigned long *flags)
-__acquires(hlist_lock)
-{
-	unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
-	raw_spinlock_t *hlist_lock;
-
-	*head = &kretprobe_inst_table[hash];
-	hlist_lock = kretprobe_table_lock_ptr(hash);
-	raw_spin_lock_irqsave(hlist_lock, *flags);
-}
-NOKPROBE_SYMBOL(kretprobe_hash_lock);
-
 static void kretprobe_table_lock(unsigned long hash,
 				 unsigned long *flags)
 __acquires(hlist_lock)
 {
 	raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
-	raw_spin_lock_irqsave(hlist_lock, *flags);
+	/*
+	 * HACK, due to kprobe_busy we'll never actually recurse, make NMI
+	 * context use a different lock class to avoid it reporting.
+	 */
+	raw_spin_lock_irqsave_nested(hlist_lock, *flags, !!in_nmi());
 }
 NOKPROBE_SYMBOL(kretprobe_table_lock);
 
-void kretprobe_hash_unlock(struct task_struct *tsk,
-			   unsigned long *flags)
+static void kretprobe_table_unlock(unsigned long hash,
+				   unsigned long *flags)
 __releases(hlist_lock)
 {
+	raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
+	raw_spin_unlock_irqrestore(hlist_lock, *flags);
+}
+NOKPROBE_SYMBOL(kretprobe_table_unlock);
+
+void kretprobe_hash_lock(struct task_struct *tsk,
+			 struct hlist_head **head, unsigned long *flags)
+__acquires(hlist_lock)
+{
 	unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
-	raw_spinlock_t *hlist_lock;
 
-	hlist_lock = kretprobe_table_lock_ptr(hash);
-	raw_spin_unlock_irqrestore(hlist_lock, *flags);
+	*head = &kretprobe_inst_table[hash];
+	kretprobe_table_lock(hash, flags);
 }
-NOKPROBE_SYMBOL(kretprobe_hash_unlock);
+NOKPROBE_SYMBOL(kretprobe_hash_lock);
 
-static void kretprobe_table_unlock(unsigned long hash,
-				   unsigned long *flags)
+void kretprobe_hash_unlock(struct task_struct *tsk,
+			   unsigned long *flags)
 __releases(hlist_lock)
 {
-	raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
-	raw_spin_unlock_irqrestore(hlist_lock, *flags);
+	unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
+	kretprobe_table_unlock(hash, flags);
 }
-NOKPROBE_SYMBOL(kretprobe_table_unlock);
+NOKPROBE_SYMBOL(kretprobe_hash_unlock);
 
 struct kprobe kprobe_busy = {
 	.addr = (void *) get_kprobe,