Date:	Sat, 13 Jun 2009 21:39:48 +0200
From:	Simon Holm Thøgersen <odie@...aau.dk>
To:	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paulus Mackerras <paulus@...ba.org>
Cc:	linux-kernel <linux-kernel@...r.kernel.org>
Subject: perfcounters: lockdep warning

Just tried kicking the tires of performance counters and perf, and got
the following warning, which doesn't look like it has been reported already.

[ 7765.594591] [ INFO: possible circular locking dependency detected ]
[ 7765.594602] 2.6.30debug-03217-gf3ad116 #47
[ 7765.594609] -------------------------------------------------------
[ 7765.594619] perf/14176 is trying to acquire lock:
[ 7765.594628]  (&ctx->mutex){+.+.+.}, at: [<c016f366>] perf_counter_for_each_sibling+0x50/0x7e
[ 7765.594660] 
[ 7765.594663] but task is already holding lock:
[ 7765.594672]  (&counter->child_mutex){+.+...}, at: [<c0172180>] perf_ioctl+0x184/0x1de
[ 7765.594696] 
[ 7765.594699] which lock already depends on the new lock.
[ 7765.594703] 
[ 7765.594711] 
[ 7765.594714] the existing dependency chain (in reverse order) is:
[ 7765.594723] 
[ 7765.594726] -> #1 (&counter->child_mutex){+.+...}:
[ 7765.594744]        [<c0146f79>] __lock_acquire+0x9a5/0xb11
[ 7765.594765]        [<c014719e>] lock_acquire+0xb9/0xdb
[ 7765.594779]        [<c0340664>] __mutex_lock_common+0x42/0x3c8
[ 7765.594798]        [<c0340a82>] mutex_lock_nested+0x2e/0x36
[ 7765.594814]        [<c016f8ad>] inherit_counter+0xdb/0x112
[ 7765.594830]        [<c01705bd>] perf_counter_init_task+0x15b/0x23f
[ 7765.594847]        [<c0124338>] copy_process+0x4fb/0xfc8
[ 7765.594865]        [<c0124f1c>] do_fork+0x117/0x2b4
[ 7765.594881]        [<c0101f4f>] sys_clone+0x29/0x30
[ 7765.594897]        [<c01032e8>] sysenter_do_call+0x12/0x3c
[ 7765.594913]        [<ffffffff>] 0xffffffff
[ 7765.594967] 
[ 7765.594970] -> #0 (&ctx->mutex){+.+.+.}:
[ 7765.594987]        [<c0146e8c>] __lock_acquire+0x8b8/0xb11
[ 7765.595004]        [<c014719e>] lock_acquire+0xb9/0xdb
[ 7765.595018]        [<c0340664>] __mutex_lock_common+0x42/0x3c8
[ 7765.595035]        [<c0340a82>] mutex_lock_nested+0x2e/0x36
[ 7765.595050]        [<c016f366>] perf_counter_for_each_sibling+0x50/0x7e
[ 7765.595067]        [<c0172189>] perf_ioctl+0x18d/0x1de
[ 7765.595083]        [<c019fa6e>] vfs_ioctl+0x27/0x6e
[ 7765.595100]        [<c019ff4f>] do_vfs_ioctl+0x45a/0x48c
[ 7765.595116]        [<c019ffb2>] sys_ioctl+0x31/0x4a
[ 7765.595132]        [<c01032e8>] sysenter_do_call+0x12/0x3c
[ 7765.595147]        [<ffffffff>] 0xffffffff
[ 7765.595163] 
[ 7765.595166] other info that might help us debug this:
[ 7765.595170] 
[ 7765.595180] 1 lock held by perf/14176:
[ 7765.595188]  #0:  (&counter->child_mutex){+.+...}, at: [<c0172180>] perf_ioctl+0x184/0x1de
[ 7765.595215] 
[ 7765.595218] stack backtrace:
[ 7765.595230] Pid: 14176, comm: perf Not tainted 2.6.30debug-03217-gf3ad116 #47
[ 7765.595240] Call Trace:
[ 7765.595254]  [<c033f7f9>] ? printk+0x14/0x16
[ 7765.595271]  [<c0146310>] print_circular_bug_tail+0x5c/0x67
[ 7765.595289]  [<c0146e8c>] __lock_acquire+0x8b8/0xb11
[ 7765.595306]  [<c014719e>] lock_acquire+0xb9/0xdb
[ 7765.595322]  [<c016f366>] ? perf_counter_for_each_sibling+0x50/0x7e
[ 7765.595338]  [<c0340664>] __mutex_lock_common+0x42/0x3c8
[ 7765.595354]  [<c016f366>] ? perf_counter_for_each_sibling+0x50/0x7e
[ 7765.595371]  [<c034099b>] ? __mutex_lock_common+0x379/0x3c8
[ 7765.595387]  [<c0340a82>] mutex_lock_nested+0x2e/0x36
[ 7765.595402]  [<c016f366>] ? perf_counter_for_each_sibling+0x50/0x7e
[ 7765.595419]  [<c016f366>] perf_counter_for_each_sibling+0x50/0x7e
[ 7765.595434]  [<c0170f78>] ? perf_counter_enable+0x0/0xad
[ 7765.595449]  [<c0171ffc>] ? perf_ioctl+0x0/0x1de
[ 7765.595464]  [<c0170f78>] ? perf_counter_enable+0x0/0xad
[ 7765.595479]  [<c0172189>] perf_ioctl+0x18d/0x1de
[ 7765.595494]  [<c0171ffc>] ? perf_ioctl+0x0/0x1de
[ 7765.595509]  [<c019fa6e>] vfs_ioctl+0x27/0x6e
[ 7765.595525]  [<c019ff4f>] do_vfs_ioctl+0x45a/0x48c
[ 7765.595540]  [<c0107003>] ? native_sched_clock+0x45/0x5e
[ 7765.595556]  [<c0144a37>] ? put_lock_stats+0x1e/0x29
[ 7765.595572]  [<c0144af8>] ? lock_release_holdtime+0xb6/0xbb
[ 7765.595589]  [<c0105e7b>] ? sys_mmap2+0x67/0x7f
[ 7765.595604]  [<c0103321>] ? sysenter_exit+0xf/0x1a
[ 7765.595620]  [<c019ffb2>] sys_ioctl+0x31/0x4a
[ 7765.595635]  [<c01032e8>] sysenter_do_call+0x12/0x3c

Peter: I guess you read both, so it is probably not a real problem,
but you're listed in MAINTAINERS with two different email addresses.


Simon Holm Thøgersen

Download attachment ".config" of type "application/x-config" (58724 bytes)
