Message-ID: <aca0ce8b1002140657l4f767635ob9b1611e4da2540b@mail.gmail.com>
Date:	Sun, 14 Feb 2010 06:57:37 -0800
From:	Eric Biederman <ebiederm@...stanetworks.com>
To:	John Kacur <jkacur@...hat.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Greg Kroah-Hartman <gregkh@...e.de>, Tejun Heo <tj@...nel.org>,
	Serge Hallyn <serue@...ibm.com>,
	"David P. Quigley" <dpquigl@...ho.nsa.gov>,
	James Morris <jmorris@...ei.org>
Subject: Re: [BUG]: Possible recursive locking detected in sysfs

On Sun, Feb 14, 2010 at 5:10 AM, John Kacur <jkacur@...hat.com> wrote:
> On Thu, Feb 11, 2010 at 12:11 PM, John Kacur <jkacur@...hat.com> wrote:
>> I'm not sure if this one has already been reported. - thanks.
>>
>> Feb 11 07:24:15 localhost kernel: [ INFO: possible recursive locking detected ]
>> Feb 11 07:24:15 localhost kernel: 2.6.33-rc7 #1
>> Feb 11 07:24:15 localhost kernel: ---------------------------------------------
>> Feb 11 07:24:15 localhost kernel: 94cpufreq/2933 is trying to acquire lock:
>> Feb 11 07:24:15 localhost kernel: (s_active){++++.+}, at: [<ffffffff81157408>] sysfs_hash_and_remove+0x53/0x6a
>> Feb 11 07:24:15 localhost kernel:
>> Feb 11 07:24:15 localhost kernel: but task is already holding lock:
>> Feb 11 07:24:15 localhost kernel: (s_active){++++.+}, at: [<ffffffff81159530>] sysfs_get_active_two+0x24/0x48
>> Feb 11 07:24:15 localhost kernel:
>> Feb 11 07:24:15 localhost kernel: other info that might help us debug this:
>> Feb 11 07:24:15 localhost kernel: 4 locks held by 94cpufreq/2933:
>> Feb 11 07:24:15 localhost kernel: #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff81157dbd>] sysfs_write_file+0x3e/0x12b
>> Feb 11 07:24:15 localhost kernel: #1:  (s_active){++++.+}, at: [<ffffffff81159530>] sysfs_get_active_two+0x24/0x48
>> Feb 11 07:24:15 localhost kernel: #2:  (s_active){++++.+}, at: [<ffffffff8115953d>] sysfs_get_active_two+0x31/0x48
>> Feb 11 07:24:15 localhost kernel: #3:  (dbs_mutex){+.+.+.}, at: [<ffffffffa04bcdb3>] cpufreq_governor_dbs+0x29b/0x348 [cpufreq_ondemand]
>> Feb 11 07:24:15 localhost kernel:
>> Feb 11 07:24:15 localhost kernel: stack backtrace:
>> Feb 11 07:24:15 localhost kernel: Pid: 2933, comm: 94cpufreq Not tainted 2.6.33-rc7 #1
>> Feb 11 07:24:15 localhost kernel: Call Trace:
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81071f53>] __lock_acquire+0xcf6/0xd8b
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81070914>] ? debug_check_no_locks_freed+0x120/0x12f
>> Feb 11 07:24:15 localhost kernel: [<ffffffff8107020e>] ? trace_hardirqs_on_caller+0x11f/0x14a
>> Feb 11 07:24:15 localhost kernel: [<ffffffff810720c0>] lock_acquire+0xd8/0xf5
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81157408>] ? sysfs_hash_and_remove+0x53/0x6a
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81159231>] sysfs_addrm_finish+0xe1/0x175
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81157408>] ? sysfs_hash_and_remove+0x53/0x6a
>> Feb 11 07:24:15 localhost kernel: [<ffffffff8137b1b5>] ? sub_preempt_count+0xa3/0xb6
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81157408>] sysfs_hash_and_remove+0x53/0x6a
>> Feb 11 07:24:15 localhost kernel: [<ffffffff8115a59d>] sysfs_remove_group+0x91/0xc9
>> Feb 11 07:24:15 localhost kernel: [<ffffffffa04bcdc6>] cpufreq_governor_dbs+0x2ae/0x348 [cpufreq_ondemand]
>> Feb 11 07:24:15 localhost kernel: [<ffffffff8137b1b5>] ? sub_preempt_count+0xa3/0xb6
>> Feb 11 07:24:15 localhost kernel: [<ffffffff812ab9cc>] __cpufreq_governor+0x89/0xc7
>> Feb 11 07:24:15 localhost kernel: [<ffffffff812ac7b9>] __cpufreq_set_policy+0x18e/0x22a
>> Feb 11 07:24:15 localhost kernel: [<ffffffff812accd6>] store_scaling_governor+0x199/0x1ed
>> Feb 11 07:24:15 localhost kernel: [<ffffffff812ad50d>] ? handle_update+0x0/0x39
>> Feb 11 07:24:15 localhost kernel: [<ffffffff8137685d>] ? down_write+0x76/0x7e
>> Feb 11 07:24:15 localhost kernel: [<ffffffff812ac607>] store+0x67/0x8b
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81157e75>] sysfs_write_file+0xf6/0x12b
>> Feb 11 07:24:15 localhost kernel: [<ffffffff811003c9>] vfs_write+0xb0/0x10a
>> Feb 11 07:24:15 localhost kernel: [<ffffffff811004f1>] sys_write+0x4c/0x75
>> Feb 11 07:24:15 localhost kernel: [<ffffffff81002d9b>] system_call_fastpath+0x16/0x1b
>>
>
> This is still present in 2.6.33-rc8.
> I can trigger it every time. I have a T500 laptop; the trick is to boot
> it up without plugging it in, so it is running on battery power, and
> then to shut the lid.
> First you get the message from above, and then it does a few more
> things before it freezes and I have to hard-reboot it.
> The problem does not occur if I have the laptop plugged into the wall.
>
> Here is the 2.6.33-rc8 version of the above. If anyone has further
> suggestions for debugging, please let me know.
> I've CC'd the people that get_maintainer.pl reports for
> fs/sysfs/inode.c and fs/sysfs/dir.c.

We have a real hang and a possibly spurious lockdep report.
Could you try my latest patches that I sent to gregkh to clear up the sysfs
lockdep reports? I don't expect they will fix your issue, but they will at
least give us a better idea of where to look.
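
For context, here is a minimal sketch (written against the generic
kobject/sysfs interfaces, not the actual ondemand governor code) of the
pattern the lockdep report is flagging: a sysfs ->store() handler runs with
its attribute's s_active reference already held, and from that context it
tears down an attribute group, which acquires s_active again for the entries
being removed. Both acquisitions share one lockdep class, so the warning
fires even when the sysfs entries involved are different. The function and
group names below are illustrative only; dbs_mutex matches the lock named in
the trace.

#include <linux/kobject.h>
#include <linux/mutex.h>
#include <linux/sysfs.h>

static DEFINE_MUTEX(dbs_mutex);                 /* lock #3 in the report above */
static struct attribute_group dbs_attr_group;   /* stand-in for the governor's group */

static ssize_t store_governor_sketch(struct kobject *kobj, struct attribute *attr,
                                     const char *buf, size_t count)
{
        /* sysfs_write_file() already holds s_active for this attribute */
        mutex_lock(&dbs_mutex);

        /*
         * Removing the old governor's sysfs group walks
         * sysfs_remove_group() -> sysfs_hash_and_remove() ->
         * sysfs_addrm_finish(), which takes s_active for the entries it
         * deletes; that is the second acquisition lockdep complains about.
         */
        sysfs_remove_group(kobj, &dbs_attr_group);

        mutex_unlock(&dbs_mutex);
        return count;
}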

Eric


> Feb 14 07:35:49 localhost kernel: =============================================
> Feb 14 07:35:49 localhost kernel: [ INFO: possible recursive locking detected ]
> Feb 14 07:35:49 localhost kernel: 2.6.33-rc8 #1
> Feb 14 07:35:49 localhost kernel: ---------------------------------------------
> Feb 14 07:35:49 localhost kernel: 94cpufreq/2914 is trying to acquire lock:
> Feb 14 07:35:49 localhost kernel: (s_active){++++.+}, at: [<ffffffff81157478>] sysfs_hash_and_remove+0x53/0x6a
> Feb 14 07:35:49 localhost kernel:
> Feb 14 07:35:49 localhost kernel: but task is already holding lock:
> Feb 14 07:35:49 localhost kernel: (s_active){++++.+}, at: [<ffffffff811595a0>] sysfs_get_active_two+0x24/0x48
> Feb 14 07:35:49 localhost kernel:
> Feb 14 07:35:49 localhost kernel: other info that might help us debug this:
> Feb 14 07:35:49 localhost kernel: 4 locks held by 94cpufreq/2914:
> Feb 14 07:35:49 localhost kernel: #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff81157e2d>] sysfs_write_file+0x3e/0x12b
> Feb 14 07:35:49 localhost kernel: #1:  (s_active){++++.+}, at: [<ffffffff811595a0>] sysfs_get_active_two+0x24/0x48
> Feb 14 07:35:49 localhost kernel: #2:  (s_active){++++.+}, at: [<ffffffff811595ad>] sysfs_get_active_two+0x31/0x48
> Feb 14 07:35:49 localhost kernel: #3:  (dbs_mutex){+.+.+.}, at: [<ffffffffa04c0dbe>] cpufreq_governor_dbs+0x29b/0x345 [cpufreq_ondemand]
> Feb 14 07:35:49 localhost kernel:
> Feb 14 07:35:49 localhost kernel: stack backtrace:
> Feb 14 07:35:49 localhost kernel: Pid: 2914, comm: 94cpufreq Not tainted 2.6.33-rc8 #1
> Feb 14 07:35:49 localhost kernel: Call Trace:
> Feb 14 07:35:49 localhost kernel: [<ffffffff81071f53>] __lock_acquire+0xcf6/0xd8b
> Feb 14 07:35:49 localhost kernel: [<ffffffff81070914>] ? debug_check_no_locks_freed+0x120/0x12f
> Feb 14 07:35:49 localhost kernel: [<ffffffff8107020e>] ? trace_hardirqs_on_caller+0x11f/0x14a
> Feb 14 07:35:49 localhost kernel: [<ffffffff810720c0>] lock_acquire+0xd8/0xf5
> Feb 14 07:35:49 localhost kernel: [<ffffffff81157478>] ? sysfs_hash_and_remove+0x53/0x6a
> Feb 14 07:35:49 localhost kernel: [<ffffffff811592a1>] sysfs_addrm_finish+0xe1/0x175
> Feb 14 07:35:49 localhost kernel: [<ffffffff81157478>] ? sysfs_hash_and_remove+0x53/0x6a
> Feb 14 07:35:49 localhost kernel: [<ffffffff8137b335>] ? sub_preempt_count+0xa3/0xb6
> Feb 14 07:35:49 localhost kernel: [<ffffffff81157478>] sysfs_hash_and_remove+0x53/0x6a
> Feb 14 07:35:49 localhost kernel: [<ffffffff8115a60d>] sysfs_remove_group+0x91/0xc9
> Feb 14 07:35:49 localhost kernel: [<ffffffffa04c0dd1>] cpufreq_governor_dbs+0x2ae/0x345 [cpufreq_ondemand]
> Feb 14 07:35:49 localhost kernel: [<ffffffff8137b335>] ? sub_preempt_count+0xa3/0xb6
> Feb 14 07:35:49 localhost kernel: [<ffffffff812aba60>] __cpufreq_governor+0x89/0xc7
> Feb 14 07:35:49 localhost kernel: [<ffffffff812ac84d>] __cpufreq_set_policy+0x18e/0x22a
> Feb 14 07:35:49 localhost kernel: [<ffffffff812acd6a>] store_scaling_governor+0x199/0x1ed
> Feb 14 07:35:49 localhost kernel: [<ffffffff812ad5a1>] ? handle_update+0x0/0x39
> Feb 14 07:35:49 localhost kernel: [<ffffffff813769ad>] ? down_write+0x76/0x7e
> Feb 14 07:35:49 localhost kernel: [<ffffffff812ac69b>] store+0x67/0x8b
> Feb 14 07:35:49 localhost kernel: [<ffffffff81157ee5>] sysfs_write_file+0xf6/0x12b
> Feb 14 07:35:49 localhost kernel: [<ffffffff811003c9>] vfs_write+0xb0/0x10a
> Feb 14 07:35:49 localhost kernel: [<ffffffff811004f1>] sys_write+0x4c/0x75
> Feb 14 07:35:49 localhost kernel: [<ffffffff81002d9b>] system_call_fastpath+0x16/0x1b
>
