Message-ID: <56A159C0.2020000@amd.com>
Date:	Thu, 21 Jan 2016 17:20:48 -0500
From:	Felix Kuehling <felix.kuehling@....com>
To:	<linux-kernel@...r.kernel.org>
Subject: Lockdep incorrectly complaining about circular dependencies involving
 read-locks

I'm running into circular lock dependencies reported by lockdep that
involve read-locks and, as far as I can tell, should not be flagged as
deadlocks at all. I wrote a very simple test function that demonstrates
the problem:

> static void test_lockdep(void)
> {
> 	struct mutex fktest_m;
> 	struct rw_semaphore fktest_s;
>
> 	mutex_init(&fktest_m);
> 	init_rwsem(&fktest_s);
>
> 	down_read(&fktest_s);
> 	mutex_lock(&fktest_m);
> 	mutex_unlock(&fktest_m);
> 	up_read(&fktest_s);
>
> 	mutex_lock(&fktest_m);
> 	down_read(&fktest_s);
> 	up_read(&fktest_s);
> 	mutex_unlock(&fktest_m);
>
> 	mutex_destroy(&fktest_m);
> }

It sets up a circular lock dependency between a mutex and a read-write
semaphore. However, the semaphore is only ever locked for reading. As I
understand it, there is no potential for a deadlock here because
multiple readers don't exclude each other. However, I get this:

> [   10.832547] 
> [   10.834122] ======================================================
> [   10.840655] [ INFO: possible circular locking dependency detected ]
> [   10.847284] 4.4.0-kfd #3 Tainted: G            E  
> [   10.852356] -------------------------------------------------------
> [   10.858989] systemd-udevd/2385 is trying to acquire lock:
> [   10.864695]  (&fktest_s){.+.+..}, at: [<ffffffffc0212463>] test_lockdep+0x9a/0xd4 [amdgpu]
> [   10.873474] 
> [   10.873474] but task is already holding lock:
> [   10.879633]  (&fktest_m){+.+...}, at: [<ffffffffc0212457>] test_lockdep+0x8e/0xd4 [amdgpu]
> [   10.888418] 
> [   10.888418] which lock already depends on the new lock.
> [   10.888418] 
> [   10.897071] 
> [   10.897071] the existing dependency chain (in reverse order) is:
> [   10.904981] 
> -> #1 (&fktest_m){+.+...}:
> [   10.909138]        [<ffffffff810ada5d>] lock_acquire+0x6d/0x90
> [   10.915309]        [<ffffffff8190ddaa>] mutex_lock_nested+0x4a/0x3a0
> [   10.922040]        [<ffffffffc0212431>] test_lockdep+0x68/0xd4 [amdgpu]
> [   10.929042]        [<ffffffffc0295009>] amdgpu_init+0x9/0x7b [amdgpu]
> [   10.935856]        [<ffffffff810003f8>] do_one_initcall+0xc8/0x200
> [   10.942449]        [<ffffffff8113f5ad>] do_init_module+0x56/0x1d8
> [   10.948919]        [<ffffffff810e83b1>] load_module+0x1b91/0x2460
> [   10.955376]        [<ffffffff810e8e5b>] SyS_finit_module+0x7b/0xa0
> [   10.961961]        [<ffffffff81910572>] entry_SYSCALL_64_fastpath+0x12/0x76
> [   10.969388] 
> -> #0 (&fktest_s){.+.+..}:
> [   10.973569]        [<ffffffff810acc2a>] __lock_acquire+0x100a/0x16b0
> [   10.980315]        [<ffffffff810ada5d>] lock_acquire+0x6d/0x90
> [   10.986502]        [<ffffffff8190e884>] down_read+0x34/0x50
> [   11.002586]        [<ffffffffc0212463>] test_lockdep+0x9a/0xd4 [amdgpu]
> [   11.009610]        [<ffffffffc0295009>] amdgpu_init+0x9/0x7b [amdgpu]
> [   11.016453]        [<ffffffff810003f8>] do_one_initcall+0xc8/0x200
> [   11.023001]        [<ffffffff8113f5ad>] do_init_module+0x56/0x1d8
> [   11.029462]        [<ffffffff810e83b1>] load_module+0x1b91/0x2460
> [   11.035927]        [<ffffffff810e8e5b>] SyS_finit_module+0x7b/0xa0
> [   11.042478]        [<ffffffff81910572>] entry_SYSCALL_64_fastpath+0x12/0x76
> [   11.049860] 
> [   11.049860] other info that might help us debug this:
> [   11.049860] 
> [   11.058356]  Possible unsafe locking scenario:
> [   11.058356] 
> [   11.064644]        CPU0                    CPU1
> [   11.069436]        ----                    ----
> [   11.074229]   lock(&fktest_m);
> [   11.077465]                                lock(&fktest_s);
> [   11.083376]                                lock(&fktest_m);
> [   11.089288]   lock(&fktest_s);
> [   11.092542] 
> [   11.092542]  *** DEADLOCK ***
> [   11.092542] 
> [   11.098819] 1 lock held by systemd-udevd/2385:
> [   11.103530]  #0:  (&fktest_m){+.+...}, at: [<ffffffffc0212457>] test_lockdep+0x8e/0xd4 [amdgpu]
> [   11.112780] 
> [   11.112780] stack backtrace:
> [   11.117388] CPU: 7 PID: 2385 Comm: systemd-udevd Tainted: G            E   4.4.0-kfd #3
> [   11.125840] Hardware name: ASUS All Series/Z97-PRO(Wi-Fi ac)/USB 3.1, BIOS 2401 04/27/2015
> [   11.134593]  ffffffff82714a90 ffff8808335d7a70 ffffffff8144e3cb ffffffff82714a90
> [   11.142421]  ffff8808335d7ab0 ffffffff8113ec7a ffff8808335d7b00 0000000000000000
> [   11.150248]  ffff8808335a9ee0 ffff8808335a9f08 ffff8808335a96c0 ffff8808335a9f08
> [   11.158076] Call Trace:
> [   11.160655]  [<ffffffff8144e3cb>] dump_stack+0x44/0x59
> [   11.166105]  [<ffffffff8113ec7a>] print_circular_bug+0x1f9/0x207
> [   11.172457]  [<ffffffff810acc2a>] __lock_acquire+0x100a/0x16b0
> [   11.178619]  [<ffffffff810ab706>] ? mark_held_locks+0x66/0x90
> [   11.184693]  [<ffffffffc0295000>] ? 0xffffffffc0295000
> [   11.190127]  [<ffffffff810ada5d>] lock_acquire+0x6d/0x90
> [   11.195758]  [<ffffffffc0212463>] ? test_lockdep+0x9a/0xd4 [amdgpu]
> [   11.202376]  [<ffffffff8190e884>] down_read+0x34/0x50
> [   11.207729]  [<ffffffffc0212463>] ? test_lockdep+0x9a/0xd4 [amdgpu]
> [   11.214370]  [<ffffffffc0212463>] test_lockdep+0x9a/0xd4 [amdgpu]
> [   11.220847]  [<ffffffffc0295009>] amdgpu_init+0x9/0x7b [amdgpu]
> [   11.227096]  [<ffffffff810003f8>] do_one_initcall+0xc8/0x200
> [   11.233083]  [<ffffffff8113f574>] ? do_init_module+0x1d/0x1d8
> [   11.239153]  [<ffffffff8119bb4f>] ? kmem_cache_alloc+0xbf/0x180
> [   11.245421]  [<ffffffff8113f5ad>] do_init_module+0x56/0x1d8
> [   11.251307]  [<ffffffff810e83b1>] load_module+0x1b91/0x2460
> [   11.257196]  [<ffffffff810e58e0>] ? __symbol_put+0x30/0x30
> [   11.262993]  [<ffffffff810e5c06>] ? copy_module_from_fd.isra.61+0xf6/0x150
> [   11.270261]  [<ffffffff810e8e5b>] SyS_finit_module+0x7b/0xa0
> [   11.276250]  [<ffffffff81910572>] entry_SYSCALL_64_fastpath+0x12/0x76

I confirmed my results with the latest master branch of
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git, but
I was seeing the same thing on a 4.1-based kernel.

Relevant kernel config bits:
> $ grep LOCKDEP .config
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_LOCKDEP=y
> # CONFIG_DEBUG_LOCKDEP is not set

I'm reading lockdep code now, trying to understand how lockdep works in
detail, and how it could properly deal with read-locks. But it will
probably take me a few more days or weeks to figure it out by myself (or
be convinced it can't be done). I'd appreciate feedback from someone
more familiar with the code.
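For what it's worth, the rule I'd expect a read-lock-aware checker to apply in the two-lock case above can be written down as a toy predicate (this is just a sketch of the shared/exclusive compatibility rule, not lockdep's actual algorithm; the names are mine):

```c
enum rw_mode { RW_READ, RW_WRITE };

/*
 * Two-thread cycle: thread A holds the rwsem in mode a_hold and then
 * waits for the mutex; thread B holds the mutex and then tries to take
 * the rwsem in mode b_acquire.  The cycle can only close into a real
 * deadlock if B's acquisition is excluded by A's hold -- which is the
 * case unless both sides are read acquisitions.
 */
int cycle_can_deadlock(enum rw_mode a_hold, enum rw_mode b_acquire)
{
	return !(a_hold == RW_READ && b_acquire == RW_READ);
}
```

In my test function both dependencies on fktest_s are reads, so the predicate says "no deadlock", yet lockdep's scenario output above prints plain lock(&fktest_s) on both CPUs as if they were exclusive acquisitions.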

Thank you,
  Felix


P.S.: I'm not subscribed to the list. I'll be watching the archives or
digests, but please CC me on replies.

-- 
F e l i x   K u e h l i n g
SMTS Software Development Engineer | Vertical Workstation/Compute
1 Commerce Valley Dr. East, Markham, ON L3T 7X6 Canada
(O) +1(289)695-1597
   _     _   _   _____   _____
  / \   | \ / | |  _  \  \ _  |
 / A \  | \M/ | | |D) )  /|_| |
/_/ \_\ |_| |_| |_____/ |__/ \|   facebook.com/AMD | amd.com
