Message-ID: <55FEC99A.7050506@oracle.com>
Date:	Sun, 20 Sep 2015 10:58:34 -0400
From:	Sasha Levin <sasha.levin@...cle.com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org
CC:	mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
	rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
	dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
	bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 14/19] rcu: Extend expedited funnel locking
 to rcu_data structure

On 07/17/2015 07:29 PM, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> 
> The strictly rcu_node based funnel-locking scheme works well in many
> cases, but systems with CONFIG_RCU_FANOUT_LEAF=64 won't necessarily get
> all that much concurrency.  This commit therefore extends the funnel
> locking into the per-CPU rcu_data structure, providing concurrency equal
> to the number of CPUs.
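
For anyone who hasn't stared at kernel/rcu/tree.c recently: the walk the
commit message describes is hand-over-hand locking from a leaf toward the
root, taking each parent's mutex before dropping the current one so the
acquisition order is always leaf-to-root. Below is a minimal user-space
sketch of that idea; struct node, funnel_lock(), and exp_done() are
simplified stand-ins, not the actual kernel implementation.

/*
 * Minimal user-space sketch of the funnel walk described above.
 * struct node, funnel_lock(), and exp_done() are simplified
 * stand-ins, not the actual kernel/rcu/tree.c implementation.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct node {
	pthread_mutex_t exp_mutex;	/* per-level funnel mutex */
	struct node *parent;		/* NULL at the root */
};

/* Completed-GP counter; proper synchronization elided in the sketch. */
static unsigned long exp_seq_done;

/* Has the expedited grace period snapshotted as @s already finished? */
static bool exp_done(unsigned long s)
{
	return exp_seq_done >= s;
}

/*
 * Hand-over-hand from @start toward the root: take the parent's mutex
 * before dropping the current one, so mutexes are always acquired in
 * leaf-to-root order.  Returns with the root mutex held, or NULL if
 * some other task already completed the grace period we wanted.
 */
static struct node *funnel_lock(struct node *start, unsigned long s)
{
	struct node *np = start;

	pthread_mutex_lock(&np->exp_mutex);
	for (;;) {
		if (exp_done(s)) {
			pthread_mutex_unlock(&np->exp_mutex);
			return NULL;	/* our work was done for us */
		}
		if (!np->parent)
			return np;	/* root held: we drive the GP */
		pthread_mutex_lock(&np->parent->exp_mutex);
		pthread_mutex_unlock(&np->exp_mutex);
		np = np->parent;
	}
}

int main(void)
{
	struct node root = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct node leaf = { PTHREAD_MUTEX_INITIALIZER, &root };
	struct node *held = funnel_lock(&leaf, 1);

	if (held) {
		exp_seq_done = 1;	/* pretend the GP completed */
		pthread_mutex_unlock(&held->exp_mutex);
	}
	printf("funnel walk %s the grace period\n", held ? "won" : "skipped");
	return 0;
}

The per-CPU extension means the leaf handed to the walk is now a per-CPU
node, so with CONFIG_RCU_FANOUT_LEAF=64 initial contention lands on 64
per-CPU mutexes rather than on one leaf mutex shared by 64 CPUs.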

Hi Paul,

I'm seeing the following lockdep warning:

[1625143.116818] ======================================================
[1625143.117918] [ INFO: possible circular locking dependency detected ]
[1625143.118853] 4.3.0-rc1-next-20150918-sasha-00081-g4b7392a-dirty #2565 Not tainted
[1625143.119938] -------------------------------------------------------
[1625143.120868] trinity-c134/25451 is trying to acquire lock:
[1625143.121686] (&rdp->exp_funnel_mutex){+.+...}, at: exp_funnel_lock (kernel/rcu/tree.c:3439)
[1625143.123364] Mutex: counter: 1 owner: None
[1625143.124052]
[1625143.124052] but task is already holding lock:
[1625143.125045] (rcu_node_exp_0){+.+...}, at: exp_funnel_lock (kernel/rcu/tree.c:3419)
[1625143.126534]
[1625143.126534] which lock already depends on the new lock.
[1625143.126534]
[1625143.127893]
[1625143.127893] the existing dependency chain (in reverse order) is:
[1625143.129137]
-> #1 (rcu_node_exp_0){+.+...}:
[1625143.129978] lock_acquire (kernel/locking/lockdep.c:3620)
[1625143.131006] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
[1625143.133122] exp_funnel_lock (kernel/rcu/tree.c:3445)
[1625143.134014] synchronize_rcu_expedited (kernel/rcu/tree_plugin.h:710)
[1625143.135180] synchronize_rcu (kernel/rcu/tree_plugin.h:532)
[1625143.136228] rds_bind (net/rds/bind.c:207)
[1625143.137214] SYSC_bind (net/socket.c:1383)
[1625143.138243] SyS_bind (net/socket.c:1369)
[1625143.139170] tracesys_phase2 (arch/x86/entry/entry_64.S:273)
[1625143.140206]
-> #0 (&rdp->exp_funnel_mutex){+.+...}:
[1625143.141165] __lock_acquire (kernel/locking/lockdep.c:1877 kernel/locking/lockdep.c:1982 kernel/locking/lockdep.c:2168 kernel/locking/lockdep.c:3239)
[1625143.142230] lock_acquire (kernel/locking/lockdep.c:3620)
[1625143.143388] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
[1625143.144462] exp_funnel_lock (kernel/rcu/tree.c:3439)
[1625143.145515] synchronize_sched_expedited (kernel/rcu/tree.c:3550 (discriminator 58))
[1625143.146739] synchronize_rcu_expedited (kernel/rcu/tree_plugin.h:725)
[1625143.147893] synchronize_rcu (kernel/rcu/tree_plugin.h:532)
[1625143.148932] rds_release (net/rds/af_rds.c:83)
[1625143.149921] sock_release (net/socket.c:572)
[1625143.150922] sock_close (net/socket.c:1024)
[1625143.151893] __fput (fs/file_table.c:209)
[1625143.152869] ____fput (fs/file_table.c:245)
[1625143.153799] task_work_run (kernel/task_work.c:117 (discriminator 1))
[1625143.155126] do_exit (kernel/exit.c:747)
[1625143.156124] do_group_exit (./arch/x86/include/asm/current.h:14 kernel/exit.c:859)
[1625143.157134] get_signal (kernel/signal.c:2307)
[1625143.158142] do_signal (arch/x86/kernel/signal.c:709)
[1625143.159129] prepare_exit_to_usermode (arch/x86/entry/common.c:251)
[1625143.160231] syscall_return_slowpath (arch/x86/entry/common.c:318)
[1625143.161443] int_ret_from_sys_call (arch/x86/entry/entry_64.S:285)
[1625143.162431]
[1625143.162431] other info that might help us debug this:
[1625143.162431]
[1625143.163737]  Possible unsafe locking scenario:
[1625143.163737]
[1625143.164724]        CPU0                    CPU1
[1625143.165466]        ----                    ----
[1625143.166198]   lock(rcu_node_exp_0);
[1625143.166841]                                lock(&rdp->exp_funnel_mutex);
[1625143.168193]                                lock(rcu_node_exp_0);
[1625143.169288]   lock(&rdp->exp_funnel_mutex);
[1625143.170064]
[1625143.170064]  *** DEADLOCK ***
[1625143.170064]
[1625143.171076] 2 locks held by trinity-c134/25451:
[1625143.171816] #0: (rcu_node_exp_0){+.+...}, at: exp_funnel_lock (kernel/rcu/tree.c:3419)
[1625143.173458] #1: (cpu_hotplug.lock){++++++}, at: try_get_online_cpus (kernel/cpu.c:111)
[1625143.175090]
[1625143.175090] stack backtrace:
[1625143.176095] CPU: 4 PID: 25451 Comm: trinity-c134 Not tainted 4.3.0-rc1-next-20150918-sasha-00081-g4b7392a-dirty #2565
[1625143.177833]  ffffffffad1e2130 ffff880169047250 ffffffff9efe97ba ffffffffad273df0
[1625143.179224]  ffff8801690472a0 ffffffff9d46b701 ffff880169047370 dffffc0000000000
[1625143.180543]  0000000069038d30 ffff880169038cc0 ffff880169038cf2 ffff880169038000
[1625143.181845] Call Trace:
[1625143.182326] dump_stack (lib/dump_stack.c:52)
[1625143.183212] print_circular_bug (kernel/locking/lockdep.c:1252)
[1625143.184186] __lock_acquire (kernel/locking/lockdep.c:1877 kernel/locking/lockdep.c:1982 kernel/locking/lockdep.c:2168 kernel/locking/lockdep.c:3239)
[1625143.187222] lock_acquire (kernel/locking/lockdep.c:3620)
[1625143.189150] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
[1625143.195413] exp_funnel_lock (kernel/rcu/tree.c:3439)
[1625143.196372] synchronize_sched_expedited (kernel/rcu/tree.c:3550 (discriminator 58))
[1625143.204736] synchronize_rcu_expedited (kernel/rcu/tree_plugin.h:725)
[1625143.210029] synchronize_rcu (kernel/rcu/tree_plugin.h:532)
[1625143.215529] rds_release (net/rds/af_rds.c:83)
[1625143.216416] sock_release (net/socket.c:572)
[1625143.217333] sock_close (net/socket.c:1024)
[1625143.218213] __fput (fs/file_table.c:209)
[1625143.219052] ____fput (fs/file_table.c:245)
[1625143.219930] task_work_run (kernel/task_work.c:117 (discriminator 1))
[1625143.221929] do_exit (kernel/exit.c:747)
[1625143.234580] do_group_exit (./arch/x86/include/asm/current.h:14 kernel/exit.c:859)
[1625143.236698] get_signal (kernel/signal.c:2307)
[1625143.238670] do_signal (arch/x86/kernel/signal.c:709)
[1625143.257306] prepare_exit_to_usermode (arch/x86/entry/common.c:251)
[1625143.259696] syscall_return_slowpath (arch/x86/entry/common.c:318)
[1625143.262075] int_ret_from_sys_call (arch/x86/entry/entry_64.S:285)
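
That two-CPU table is the classic ABBA inversion: the same two mutexes
taken in opposite orders on two paths. Stripped of all the RCU machinery,
the shape lockdep is complaining about looks like this stand-alone
pthreads sketch (the mutex names only mirror the splat; none of this is
kernel code):

#include <pthread.h>
#include <unistd.h>

/* Names only mirror the splat; these are plain pthread mutexes. */
static pthread_mutex_t rcu_node_exp_0 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t exp_funnel_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *cpu0(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&rcu_node_exp_0);	/* first lock, order A->B */
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&exp_funnel_mutex);	/* blocks: cpu1 holds it */
	pthread_mutex_unlock(&exp_funnel_mutex);
	pthread_mutex_unlock(&rcu_node_exp_0);
	return NULL;
}

static void *cpu1(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&exp_funnel_mutex);	/* first lock, order B->A */
	sleep(1);
	pthread_mutex_lock(&rcu_node_exp_0);	/* blocks: cpu0 holds it */
	pthread_mutex_unlock(&rcu_node_exp_0);
	pthread_mutex_unlock(&exp_funnel_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);		/* never returns: ABBA deadlock */
	pthread_join(t1, NULL);
	return 0;
}

Run it and both threads block inside the window forever; lockdep flags
the same inversion from the recorded dependency chains without needing
the deadlock to actually happen.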


Thanks,
Sasha
