Message-ID: <20151201141323.GB14230@pathway.suse.cz>
Date: Tue, 1 Dec 2015 15:13:24 +0100
From: Petr Mladek <pmladek@...e.com>
To: Jiri Slaby <jslaby@...e.cz>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>,
Li Bin <huawei.libin@...wei.com>, sjenning@...hat.com,
jikos@...nel.org, vojtech@...e.com, live-patching@...r.kernel.org,
linux-kernel@...r.kernel.org, guohanjun@...wei.com,
dingtianhong@...wei.com, xiexiuqi@...wei.com,
zhouchengming1@...wei.com, Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH] livepatch: fix race between enabled_store() and
klp_unregister_patch()
On Tue 2015-12-01 09:50:23, Jiri Slaby wrote:
> On 12/01/2015, 02:11 AM, Josh Poimboeuf wrote:
> > When I try to recreate something similar by putting a delay in
> > enabled_store(), klp_free_patch() just sleeps on its call to
> > kobject_put() until enabled_store() returns. The unregister stack looks
> > like:
> >
> > [<ffffffff812e966b>] __kernfs_remove+0x1fb/0x380
> > [<ffffffff812ea273>] kernfs_remove+0x23/0x40
> > [<ffffffff812ec601>] sysfs_remove_dir+0x51/0x80
> > [<ffffffff81407fb8>] kobject_del+0x18/0x50
> > [<ffffffff8140804a>] kobject_release+0x5a/0x190
> > [<ffffffff81407f27>] kobject_put+0x27/0x50
>
> What about _put outside of klp_mutex in klp_unregister_patch (and maybe
> the other _put's as well)? Plus Li Bin's patch.
This might work. But I am pretty sure that we would also need to move
all the other kobject_put() calls outside of the lock.
I wondered how the approach with mutex_trylock() would look
and came up with the patch below.
It is not trivial, but it still might be easier than moving all
the kobject_put() calls. It should also be easier to review
because all the logic is in a single place.
What do you think?
From 3eaec912f2700cd2a886ada1e7b4361ae192ef25 Mon Sep 17 00:00:00 2001
From: Petr Mladek <pmladek@...e.com>
Date: Tue, 1 Dec 2015 13:25:34 +0100
Subject: [PATCH] livepatch: Avoid deadlock when unregistering and enabling a
patch
There is a possible deadlock between kobject_put() calls and
enabled_store(); see the lockdep report below.
One solution would be to call all the kobject_put()s without
holding klp_mutex and to check in enabled_store() whether the
patch is registered. But this would make the unregister/free
code even more scattered.
This patch takes the other possible approach. It uses a trylock
in enabled_store(). If the lock is not available and the patch
is not registered, it is probably being removed. There is
nothing to do and enabled_store() returns -EINVAL.
If the lock is not available but the patch is registered,
it tries harder to get the lock. It uses mutex_is_locked()
in the busy loop to avoid cache-line bouncing.
Lockdep report:
[ 69.512196] ======================================================
[ 69.513139] [ INFO: possible circular locking dependency detected ]
[ 69.513437] 4.4.0-rc3-4-default+ #2079 Tainted: G W E K
[ 69.513437] -------------------------------------------------------
[ 69.513437] rmmod/3786 is trying to acquire lock:
[ 69.513437] (s_active#99){++++.+}, at: [<ffffffff8127d4a3>] kernfs_remove+0x23/0x40
[ 69.513437]
but task is already holding lock:
[ 69.513437] (klp_mutex){+.+.+.}, at: [<ffffffff810de383>] klp_unregister_patch+0x23/0xc0
[ 69.513437]
which lock already depends on the new lock.
[ 69.513437]
the existing dependency chain (in reverse order) is:
[ 69.513437]
-> #1 (klp_mutex){+.+.+.}:
[ 69.513437] [<ffffffff810bfd5d>] lock_acquire+0xad/0x130
[ 69.513437] [<ffffffff81916f04>] mutex_lock_nested+0x44/0x380
[ 69.513437] [<ffffffff810debe0>] enabled_store+0x50/0xc0
[ 69.513437] [<ffffffff813fa4ef>] kobj_attr_store+0xf/0x20
[ 69.513437] [<ffffffff8127ef44>] sysfs_kf_write+0x44/0x60
[ 69.513437] [<ffffffff8127e564>] kernfs_fop_write+0x144/0x190
[ 69.513437] [<ffffffff811fb648>] __vfs_write+0x28/0xe0
[ 69.513437] [<ffffffff811fbd02>] vfs_write+0xa2/0x1a0
[ 69.513437] [<ffffffff811fca19>] SyS_write+0x49/0xa0
[ 69.513437] [<ffffffff8191a2f2>] entry_SYSCALL_64_fastpath+0x12/0x76
[ 69.513437]
-> #0 (s_active#99){++++.+}:
[ 69.513437] [<ffffffff810beead>] __lock_acquire+0x14fd/0x1bd0
[ 69.513437] [<ffffffff810bfd5d>] lock_acquire+0xad/0x130
[ 69.513437] [<ffffffff8127c745>] __kernfs_remove+0x1f5/0x2b0
[ 69.513437] [<ffffffff8127d4a3>] kernfs_remove+0x23/0x40
[ 69.513437] [<ffffffff8127f881>] sysfs_remove_dir+0x51/0x80
[ 69.513437] [<ffffffff813fa6e8>] kobject_del+0x18/0x50
[ 69.513437] [<ffffffff813fa774>] kobject_cleanup+0x54/0x70
[ 69.513437] [<ffffffff813fa645>] kobject_put+0x25/0x50
[ 69.513437] [<ffffffff810de3f8>] klp_unregister_patch+0x98/0xc0
[ 69.513437] [<ffffffffa00000c5>] livepatch_exit+0x25/0xf60 [livepatch_sample]
[ 69.513437] [<ffffffff810fd21c>] SyS_delete_module+0x16c/0x1d0
[ 69.513437] [<ffffffff8191a2f2>] entry_SYSCALL_64_fastpath+0x12/0x76
[ 69.513437]
other info that might help us debug this:
[ 69.513437] Possible unsafe locking scenario:
[ 69.513437] CPU0 CPU1
[ 69.513437] ---- ----
[ 69.513437] lock(klp_mutex);
[ 69.513437] lock(s_active#99);
[ 69.513437] lock(klp_mutex);
[ 69.513437] lock(s_active#99);
[ 69.513437]
*** DEADLOCK ***
Signed-off-by: Petr Mladek <pmladek@...e.com>
---
kernel/livepatch/core.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 27712384c69e..e2f00f32bb00 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -612,7 +612,19 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
patch = container_of(kobj, struct klp_patch, kobj);
- mutex_lock(&klp_mutex);
+ /*
+ * Avoid a deadlock with kobject_put(&patch->kobj) that is
+ * called under klp_mutex. Bail out when the patch is no
+ * longer registered.
+ */
+ if (!mutex_trylock(&klp_mutex)) {
+ if (!klp_is_patch_registered(patch))
+ return -EINVAL;
+ /* Do not spin with trylock; that bounces cache lines. */
+ while (mutex_is_locked(&klp_mutex) &&
+ klp_is_patch_registered(patch))
+ cond_resched();
+ }
if (val == patch->state) {
/* already in requested state */
--
1.8.5.6