Date:	Thu, 25 Mar 2010 11:28:59 +0000
From:	David Howells <dhowells@...hat.com>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc:	dhowells@...hat.com,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Johannes Berg <johannes@...solutions.net>
Subject: Re: [2.6.33-rc5] Weird deadlock when shutting down

Benjamin Herrenschmidt <benh@...nel.crashing.org> wrote:

> Johannes and I see this on our quad G5s... it -could- be similar to
> one reported a short while ago by Xiaotian Feng <xtfeng@...il.com>
> under the subject [2.6.33-rc4] sysfs lockdep warnings on cpu hotplug.
>  
> Basically, the machine deadlocks right after printing the following
> when doing a shutdown:
> 
> halt/4071 is trying to acquire lock:
>  (s_active){++++.+}, at: [<c0000000001ef868>] .sysfs_addrm_finish+0x58/0xc0
> 
> but task is already holding lock:
>  (&per_cpu(cpu_policy_rwsem, cpu)){+.+.+.}, at: [<c0000000004cd6ac>] .lock_policy_rwsem_write+0x84/0xf4
> 
> which lock already depends on the new lock.
> 
> the existing dependency chain (in reverse order) is:
> 
> <nothing else ... machine deadlocked here>

I see this now, with a full backtrace:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.34-rc2-cachefs #115
-------------------------------------------------------
halt/2291 is trying to acquire lock:
 (s_active#31){++++.+}, at: [<ffffffff81104950>] sysfs_addrm_finish+0x31/0x5a

but task is already holding lock:
 (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [<ffffffff812a3a92>] lock_policy_rwsem_write+0x4a/0x7b

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}:
       [<ffffffff81053da2>] __lock_acquire+0x1343/0x16cd
       [<ffffffff81054183>] lock_acquire+0x57/0x6d
       [<ffffffff813637b6>] down_write+0x3f/0x62
       [<ffffffff812a3a92>] lock_policy_rwsem_write+0x4a/0x7b
       [<ffffffff812a3b36>] store+0x39/0x79
       [<ffffffff81103589>] sysfs_write_file+0x103/0x13f
       [<ffffffff810afddc>] vfs_write+0xad/0x172
       [<ffffffff810aff5a>] sys_write+0x45/0x6c
       [<ffffffff81001eeb>] system_call_fastpath+0x16/0x1b

-> #0 (s_active#31){++++.+}:
       [<ffffffff81053a59>] __lock_acquire+0xffa/0x16cd
       [<ffffffff81054183>] lock_acquire+0x57/0x6d
       [<ffffffff81104086>] sysfs_deactivate+0x8c/0xc9
       [<ffffffff81104950>] sysfs_addrm_finish+0x31/0x5a
       [<ffffffff81104a33>] sysfs_remove_dir+0x75/0x88
       [<ffffffff811bafae>] kobject_del+0x16/0x37
       [<ffffffff811bb00d>] kobject_release+0x3e/0x66
       [<ffffffff811bbd71>] kref_put+0x43/0x4d
       [<ffffffff811baf29>] kobject_put+0x47/0x4b
       [<ffffffff812a39b2>] __cpufreq_remove_dev+0x1da/0x236
       [<ffffffff8136178e>] cpufreq_cpu_callback+0x62/0x7a
       [<ffffffff81048362>] notifier_call_chain+0x32/0x5e
       [<ffffffff810483ed>] __raw_notifier_call_chain+0x9/0xb
       [<ffffffff81351736>] _cpu_down+0x90/0x29e
       [<ffffffff810311d3>] disable_nonboot_cpus+0x6f/0x105
       [<ffffffff8103f845>] kernel_power_off+0x21/0x3b
       [<ffffffff8103facd>] sys_reboot+0x103/0x16a
       [<ffffffff81001eeb>] system_call_fastpath+0x16/0x1b

other info that might help us debug this:

4 locks held by halt/2291:
 #0:  (reboot_mutex){+.+.+.}, at: [<ffffffff8103fa5b>] sys_reboot+0x91/0x16a
 #1:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff81031102>] cpu_maps_update_begin+0x12/0x14
 #2:  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8103113d>] cpu_hotplug_begin+0x27/0x4e
 #3:  (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [<ffffffff812a3a92>] lock_policy_rwsem_write+0x4a/0x7b

stack backtrace:
Pid: 2291, comm: halt Not tainted 2.6.34-rc2-cachefs #115
Call Trace:
 [<ffffffff81052522>] print_circular_bug+0xae/0xbd
 [<ffffffff81053a59>] __lock_acquire+0xffa/0x16cd
 [<ffffffff81054183>] lock_acquire+0x57/0x6d
 [<ffffffff81104950>] ? sysfs_addrm_finish+0x31/0x5a
 [<ffffffff81044acf>] ? __init_waitqueue_head+0x35/0x46
 [<ffffffff81104086>] sysfs_deactivate+0x8c/0xc9
 [<ffffffff81104950>] ? sysfs_addrm_finish+0x31/0x5a
 [<ffffffff811044a3>] ? release_sysfs_dirent+0x9e/0xbe
 [<ffffffff81104950>] sysfs_addrm_finish+0x31/0x5a
 [<ffffffff81104a33>] sysfs_remove_dir+0x75/0x88
 [<ffffffff811bafae>] kobject_del+0x16/0x37
 [<ffffffff811bb00d>] kobject_release+0x3e/0x66
 [<ffffffff811bafcf>] ? kobject_release+0x0/0x66
 [<ffffffff811bbd71>] kref_put+0x43/0x4d
 [<ffffffff811baf29>] kobject_put+0x47/0x4b
 [<ffffffff812a39b2>] __cpufreq_remove_dev+0x1da/0x236
 [<ffffffff8136178e>] cpufreq_cpu_callback+0x62/0x7a
 [<ffffffff81048362>] notifier_call_chain+0x32/0x5e
 [<ffffffff810483ed>] __raw_notifier_call_chain+0x9/0xb
 [<ffffffff81351736>] _cpu_down+0x90/0x29e
 [<ffffffff810311d3>] disable_nonboot_cpus+0x6f/0x105
 [<ffffffff8103f845>] kernel_power_off+0x21/0x3b
 [<ffffffff8103facd>] sys_reboot+0x103/0x16a
 [<ffffffff813636d9>] ? do_nanosleep+0x78/0xb2
 [<ffffffff8104797d>] ? hrtimer_nanosleep+0xab/0x118
 [<ffffffff810473a6>] ? hrtimer_wakeup+0x0/0x21
 [<ffffffff81364e29>] ? retint_swapgs+0xe/0x13
 [<ffffffff81051e4e>] ? trace_hardirqs_on_caller+0x10c/0x130
 [<ffffffff8107496a>] ? audit_syscall_entry+0x17d/0x1b0
 [<ffffffff81364354>] ? trace_hardirqs_on_thunk+0x3a/0x3f
 [<ffffffff81001eeb>] system_call_fastpath+0x16/0x1b
Broke affinity for irq 4
lockdep: fixing up alternatives.
SMP alternatives: switching to UP code
Power down.
acpi_power_off called
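
For anyone parsing the chain above: it is the usual AB-BA inversion. The sysfs
write path (chain #1) holds the attribute's s_active reference while store()
takes cpu_policy_rwsem, whereas the CPU-removal path (chain #0) holds
cpu_policy_rwsem and then waits in sysfs_addrm_finish()/sysfs_deactivate() for
s_active to drain. A minimal userspace sketch of that shape - lock and function
names below are hypothetical stand-ins, not the real cpufreq/sysfs code - looks
like this:

/*
 * Illustration only: "sysfs_lock" stands in for the s_active count,
 * "policy_lock" for the per-CPU cpu_policy_rwsem.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sysfs_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t policy_lock = PTHREAD_MUTEX_INITIALIZER;

/* Chain #1: a sysfs store() - the active reference is already held,
 * then the per-CPU policy lock is taken (sysfs_lock -> policy_lock). */
static void sysfs_store_path(void)
{
	pthread_mutex_lock(&sysfs_lock);
	pthread_mutex_lock(&policy_lock);
	printf("store: attribute written under both locks\n");
	pthread_mutex_unlock(&policy_lock);
	pthread_mutex_unlock(&sysfs_lock);
}

/* Chain #0: CPU removal - the policy lock is held, then the teardown
 * waits for the sysfs entry to go inactive (policy_lock -> sysfs_lock). */
static void cpu_remove_path(void)
{
	pthread_mutex_lock(&policy_lock);
	pthread_mutex_lock(&sysfs_lock);
	printf("remove: sysfs dir torn down under both locks\n");
	pthread_mutex_unlock(&sysfs_lock);
	pthread_mutex_unlock(&policy_lock);
}

int main(void)
{
	/* Run sequentially so this example terminates; if the two paths
	 * ran concurrently, each could take its first lock and then block
	 * forever on the other's - the deadlock lockdep is warning about. */
	sysfs_store_path();
	cpu_remove_path();
	return 0;
}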


David