Message-ID: <20150921041244.GF4029@linux.vnet.ibm.com>
Date: Sun, 20 Sep 2015 21:12:44 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Sasha Levin <sasha.levin@...cle.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 14/19] rcu: Extend expedited funnel locking
to rcu_data structure
On Sun, Sep 20, 2015 at 10:58:34AM -0400, Sasha Levin wrote:
> On 07/17/2015 07:29 PM, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> >
> > The strictly rcu_node based funnel-locking scheme works well in many
> > cases, but systems with CONFIG_RCU_FANOUT_LEAF=64 won't necessarily get
> > all that much concurrency. This commit therefore extends the funnel
> > locking into the per-CPU rcu_data structure, providing concurrency equal
> > to the number of CPUs.
>
> Hi Paul,
>
> I'm seeing the following lockdep warning:
>
> [1625143.116818] ======================================================
> [1625143.117918] [ INFO: possible circular locking dependency detected ]
> [1625143.118853] 4.3.0-rc1-next-20150918-sasha-00081-g4b7392a-dirty #2565 Not tainted
> [1625143.119938] -------------------------------------------------------
> [1625143.120868] trinity-c134/25451 is trying to acquire lock:
> [1625143.121686] (&rdp->exp_funnel_mutex){+.+...}, at: exp_funnel_lock (kernel/rcu/tree.c:3439)
> [1625143.123364] Mutex: counter: 1 owner: None
> [1625143.124052]
> [1625143.124052] but task is already holding lock:
> [1625143.125045] (rcu_node_exp_0){+.+...}, at: exp_funnel_lock (kernel/rcu/tree.c:3419)
> [1625143.126534]
> [1625143.126534] which lock already depends on the new lock.
> [1625143.126534]
> [1625143.127893]
> [1625143.127893] the existing dependency chain (in reverse order) is:
> [1625143.129137]
> -> #1 (rcu_node_exp_0){+.+...}:
> [1625143.129978] lock_acquire (kernel/locking/lockdep.c:3620)
> [1625143.131006] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
> [1625143.133122] exp_funnel_lock (kernel/rcu/tree.c:3445)
> [1625143.134014] synchronize_rcu_expedited (kernel/rcu/tree_plugin.h:710)
> [1625143.135180] synchronize_rcu (kernel/rcu/tree_plugin.h:532)
> [1625143.136228] rds_bind (net/rds/bind.c:207)
> [1625143.137214] SYSC_bind (net/socket.c:1383)
> [1625143.138243] SyS_bind (net/socket.c:1369)
> [1625143.139170] tracesys_phase2 (arch/x86/entry/entry_64.S:273)
> [1625143.140206]
> -> #0 (&rdp->exp_funnel_mutex){+.+...}:
> [1625143.141165] __lock_acquire (kernel/locking/lockdep.c:1877 kernel/locking/lockdep.c:1982 kernel/locking/lockdep.c:2168 kernel/locking/lockdep.c:3239)
> [1625143.142230] lock_acquire (kernel/locking/lockdep.c:3620)
> [1625143.143388] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
> [1625143.144462] exp_funnel_lock (kernel/rcu/tree.c:3439)
> [1625143.145515] synchronize_sched_expedited (kernel/rcu/tree.c:3550 (discriminator 58))
> [1625143.146739] synchronize_rcu_expedited (kernel/rcu/tree_plugin.h:725)
> [1625143.147893] synchronize_rcu (kernel/rcu/tree_plugin.h:532)
> [1625143.148932] rds_release (net/rds/af_rds.c:83)
> [1625143.149921] sock_release (net/socket.c:572)
> [1625143.150922] sock_close (net/socket.c:1024)
> [1625143.151893] __fput (fs/file_table.c:209)
> [1625143.152869] ____fput (fs/file_table.c:245)
> [1625143.153799] task_work_run (kernel/task_work.c:117 (discriminator 1))
> [1625143.155126] do_exit (kernel/exit.c:747)
> [1625143.156124] do_group_exit (./arch/x86/include/asm/current.h:14 kernel/exit.c:859)
> [1625143.157134] get_signal (kernel/signal.c:2307)
> [1625143.158142] do_signal (arch/x86/kernel/signal.c:709)
> [1625143.159129] prepare_exit_to_usermode (arch/x86/entry/common.c:251)
> [1625143.160231] syscall_return_slowpath (arch/x86/entry/common.c:318)
> [1625143.161443] int_ret_from_sys_call (arch/x86/entry/entry_64.S:285)
> [1625143.162431]
> [1625143.162431] other info that might help us debug this:
> [1625143.162431]
> [1625143.163737] Possible unsafe locking scenario:
> [1625143.163737]
> [1625143.164724] CPU0 CPU1
> [1625143.165466] ---- ----
> [1625143.166198]   lock(rcu_node_exp_0);
> [1625143.166841]                                lock(&rdp->exp_funnel_mutex);
> [1625143.168193]                                lock(rcu_node_exp_0);
> [1625143.169288]   lock(&rdp->exp_funnel_mutex);
> [1625143.170064]
> [1625143.170064] *** DEADLOCK ***
> [1625143.170064]
> [1625143.171076] 2 locks held by trinity-c134/25451:
> [1625143.171816] #0: (rcu_node_exp_0){+.+...}, at: exp_funnel_lock (kernel/rcu/tree.c:3419)
> [1625143.173458] #1: (cpu_hotplug.lock){++++++}, at: try_get_online_cpus (kernel/cpu.c:111)
> [1625143.175090]
> [1625143.175090] stack backtrace:
> [1625143.176095] CPU: 4 PID: 25451 Comm: trinity-c134 Not tainted 4.3.0-rc1-next-20150918-sasha-00081-g4b7392a-dirty #2565
> [1625143.177833] ffffffffad1e2130 ffff880169047250 ffffffff9efe97ba ffffffffad273df0
> [1625143.179224] ffff8801690472a0 ffffffff9d46b701 ffff880169047370 dffffc0000000000
> [1625143.180543] 0000000069038d30 ffff880169038cc0 ffff880169038cf2 ffff880169038000
> [1625143.181845] Call Trace:
> [1625143.182326] dump_stack (lib/dump_stack.c:52)
> [1625143.183212] print_circular_bug (kernel/locking/lockdep.c:1252)
> [1625143.184186] __lock_acquire (kernel/locking/lockdep.c:1877 kernel/locking/lockdep.c:1982 kernel/locking/lockdep.c:2168 kernel/locking/lockdep.c:3239)
> [1625143.187222] lock_acquire (kernel/locking/lockdep.c:3620)
> [1625143.189150] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
> [1625143.195413] exp_funnel_lock (kernel/rcu/tree.c:3439)
> [1625143.196372] synchronize_sched_expedited (kernel/rcu/tree.c:3550 (discriminator 58))
> [1625143.204736] synchronize_rcu_expedited (kernel/rcu/tree_plugin.h:725)
> [1625143.210029] synchronize_rcu (kernel/rcu/tree_plugin.h:532)
> [1625143.215529] rds_release (net/rds/af_rds.c:83)
> [1625143.216416] sock_release (net/socket.c:572)
> [1625143.217333] sock_close (net/socket.c:1024)
> [1625143.218213] __fput (fs/file_table.c:209)
> [1625143.219052] ____fput (fs/file_table.c:245)
> [1625143.219930] task_work_run (kernel/task_work.c:117 (discriminator 1))
> [1625143.221929] do_exit (kernel/exit.c:747)
> [1625143.234580] do_group_exit (./arch/x86/include/asm/current.h:14 kernel/exit.c:859)
> [1625143.236698] get_signal (kernel/signal.c:2307)
> [1625143.238670] do_signal (arch/x86/kernel/signal.c:709)
> [1625143.257306] prepare_exit_to_usermode (arch/x86/entry/common.c:251)
> [1625143.259696] syscall_return_slowpath (arch/x86/entry/common.c:318)
> [1625143.262075] int_ret_from_sys_call (arch/x86/entry/entry_64.S:285)
Hmmm... I created rdp->exp_funnel_mutex, but failed to give RCU-sched
its own lock class. Does the following untested patch fix things for you?
							Thanx, Paul
------------------------------------------------------------------------
rcu: Suppress lockdep false positive for rdp->exp_funnel_mutex
In kernels built with CONFIG_PREEMPT=y, synchronize_rcu_expedited()
invokes synchronize_sched_expedited() while holding RCU-preempt's
root rcu_node structure's ->exp_funnel_mutex, which is acquired after
the rcu_data structure's ->exp_funnel_mutex. The first thing that
synchronize_sched_expedited() will do is acquire RCU-sched's rcu_data
structure's ->exp_funnel_mutex. There is no danger of an actual deadlock
because the locking order is always from RCU-preempt's expedited mutexes
to those of RCU-sched. Unfortunately, lockdep considers both rcu_data
structures' ->exp_funnel_mutex to be in the same lock class and therefore
reports a deadlock cycle.
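To make the class-versus-instance distinction concrete, consider a
minimal standalone sketch (illustrative names only; "foo", "a", "b",
and init_foo_mutex() are made up, not kernel code).  The mutex_init()
macro plants a static struct lock_class_key at its call site, so every
mutex initialized through the same call site lands in the same lockdep
class:

	#include <linux/mutex.h>

	struct foo {
		struct mutex m;
	};

	static struct foo a, b;

	static void init_foo_mutex(struct foo *fp)
	{
		mutex_init(&fp->m);  /* one call site => one shared class */
	}

	/*
	 * After init_foo_mutex(&a) and init_foo_mutex(&b), one task
	 * acquiring a.m then b.m while another acquires b.m then a.m
	 * looks to lockdep like "class -> same class" in both orders,
	 * so lockdep reports a possible deadlock even though the two
	 * mutex instances are distinct.
	 */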
This commit silences this false positive by placing RCU-sched's rcu_data
structures' ->exp_funnel_mutex locks into their own lock class.
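For reference, lockdep_set_class_and_name() may be applied to an
already-initialized lock to move it into a caller-supplied class; a
minimal usage sketch with a made-up my_lock:

	#include <linux/lockdep.h>
	#include <linux/mutex.h>

	static struct lock_class_key my_key;	/* hypothetical key */
	static struct mutex my_lock;		/* hypothetical lock */

	static void my_lock_init(void)
	{
		mutex_init(&my_lock);  /* default per-call-site class */
		lockdep_set_class_and_name(&my_lock, &my_key, "my_lock");
		/* my_lock now has its own class, shown as "my_lock" */
	}

(lockdep_set_class() does the same but derives the report name from
the key.)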
Reported-by: Sasha Levin <sasha.levin@...cle.com>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 9f75f25cc5d9..775d36cc0050 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3868,6 +3868,7 @@ static void rcu_init_new_rnp(struct rcu_node *rnp_leaf)
 static void __init
 rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
 {
+	static struct lock_class_key rcu_exp_sched_rdp_class;
 	unsigned long flags;
 	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
 	struct rcu_node *rnp = rcu_get_root(rsp);
@@ -3883,6 +3884,10 @@ rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
 	mutex_init(&rdp->exp_funnel_mutex);
 	rcu_boot_init_nocb_percpu_data(rdp);
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	if (rsp == &rcu_sched_state)
+		lockdep_set_class_and_name(&rdp->exp_funnel_mutex,
+					   &rcu_exp_sched_rdp_class,
+					   "rcu_data_exp_sched");
 }
 
 /*
--