Message-ID: <6599ad830807021531r16013460re28f813be8293d6c@mail.gmail.com>
Date:	Wed, 2 Jul 2008 15:31:59 -0700
From:	"Paul Menage" <menage@...gle.com>
To:	"Max Krasnyansky" <maxk@...lcomm.com>
Cc:	a.p.zijlstra@...llo.nl, pj@....com, vegard.nossum@...il.com,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] CGroups: Add a per-subsystem hierarchy lock

On Tue, Jul 1, 2008 at 8:55 PM, Max Krasnyansky <maxk@...lcomm.com> wrote:
> I was about to say "yeah, looks good" and then tried a couple of
> different hot-plug scenarios.
> We still have circular locking even with your patch.
>

What sequence of actions are you performing? I've not been able to
reproduce a lockdep failure.

Paul

> [ INFO: possible circular locking dependency detected ]
> 2.6.26-rc8 #4
> -------------------------------------------------------
> bash/2779 is trying to acquire lock:
>  (&cpu_hotplug.lock){--..}, at: [<ffffffff8025e024>] get_online_cpus+0x24/0x40
>
> but task is already holding lock:
>  (sched_domains_mutex){--..}, at: [<ffffffff8022f5e9>]
> partition_sched_domains+0x29/0x2b0
>
> which lock already depends on the new lock.
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (sched_domains_mutex){--..}:
>       [<ffffffff8025988f>] __lock_acquire+0x9cf/0xe50
>       [<ffffffff80259d6b>] lock_acquire+0x5b/0x80
>       [<ffffffff804d12c4>] mutex_lock_nested+0x94/0x250
>       [<ffffffff8022f5e9>] partition_sched_domains+0x29/0x2b0
>       [<ffffffff80268f9d>] rebuild_sched_domains+0x9d/0x3f0
>       [<ffffffff80269f05>] cpuset_handle_cpuhp+0x205/0x220
>       [<ffffffff804d688f>] notifier_call_chain+0x3f/0x80
>       [<ffffffff80250679>] __raw_notifier_call_chain+0x9/0x10
>       [<ffffffff804c1748>] _cpu_down+0xa8/0x290
>       [<ffffffff804c196b>] cpu_down+0x3b/0x60
>       [<ffffffff804c2c68>] store_online+0x48/0xa0
>       [<ffffffff803a46c4>] sysdev_store+0x24/0x30
>       [<ffffffff802eebba>] sysfs_write_file+0xca/0x140
>       [<ffffffff8029cb3b>] vfs_write+0xcb/0x170
>       [<ffffffff8029ccd0>] sys_write+0x50/0x90
>       [<ffffffff8020b92b>] system_call_after_swapgs+0x7b/0x80
>       [<ffffffffffffffff>] 0xffffffffffffffff
>
> -> #1 (&ss->hierarchy_mutex){--..}:
>       [<ffffffff8025988f>] __lock_acquire+0x9cf/0xe50
>       [<ffffffff80259d6b>] lock_acquire+0x5b/0x80
>       [<ffffffff804d12c4>] mutex_lock_nested+0x94/0x250
>       [<ffffffff80269d39>] cpuset_handle_cpuhp+0x39/0x220
>       [<ffffffff804d688f>] notifier_call_chain+0x3f/0x80
>       [<ffffffff80250679>] __raw_notifier_call_chain+0x9/0x10
>       [<ffffffff804c1748>] _cpu_down+0xa8/0x290
>       [<ffffffff804c196b>] cpu_down+0x3b/0x60
>       [<ffffffff804c2c68>] store_online+0x48/0xa0
>       [<ffffffff803a46c4>] sysdev_store+0x24/0x30
>       [<ffffffff802eebba>] sysfs_write_file+0xca/0x140
>       [<ffffffff8029cb3b>] vfs_write+0xcb/0x170
>       [<ffffffff8029ccd0>] sys_write+0x50/0x90
>       [<ffffffff8020b92b>] system_call_after_swapgs+0x7b/0x80
>       [<ffffffffffffffff>] 0xffffffffffffffff
>
> -> #0 (&cpu_hotplug.lock){--..}:
>       [<ffffffff80259913>] __lock_acquire+0xa53/0xe50
>       [<ffffffff80259d6b>] lock_acquire+0x5b/0x80
>       [<ffffffff804d12c4>] mutex_lock_nested+0x94/0x250
>       [<ffffffff8025e024>] get_online_cpus+0x24/0x40
>       [<ffffffff8022fee1>] sched_getaffinity+0x11/0x80
>       [<ffffffff8026e6d9>] __synchronize_sched+0x19/0x90
>       [<ffffffff8022ed46>] detach_destroy_domains+0x46/0x50
>       [<ffffffff8022f6b9>] partition_sched_domains+0xf9/0x2b0
>       [<ffffffff80268f9d>] rebuild_sched_domains+0x9d/0x3f0
>       [<ffffffff8026a858>] cpuset_common_file_write+0x2b8/0x5c0
>       [<ffffffff8026657c>] cgroup_file_write+0x7c/0x1a0
>       [<ffffffff8029cb3b>] vfs_write+0xcb/0x170
>       [<ffffffff8029ccd0>] sys_write+0x50/0x90
>       [<ffffffff8020b92b>] system_call_after_swapgs+0x7b/0x80
>       [<ffffffffffffffff>] 0xffffffffffffffff
>
> other info that might help us debug this:
>
> 2 locks held by bash/2779:
>  #0:  (cgroup_mutex){--..}, at: [<ffffffff802653e2>] cgroup_lock+0x12/0x20
>  #1:  (sched_domains_mutex){--..}, at: [<ffffffff8022f5e9>]
> partition_sched_domains+0x29/0x2b0
>
> stack backtrace:
> Pid: 2779, comm: bash Not tainted 2.6.26-rc8 #4
>
> Call Trace:
>  [<ffffffff80258c0c>] print_circular_bug_tail+0x8c/0x90
>  [<ffffffff802589c4>] ? print_circular_bug_entry+0x54/0x60
>  [<ffffffff80259913>] __lock_acquire+0xa53/0xe50
>  [<ffffffff8025e024>] ? get_online_cpus+0x24/0x40
>  [<ffffffff80259d6b>] lock_acquire+0x5b/0x80
>  [<ffffffff8025e024>] ? get_online_cpus+0x24/0x40
>  [<ffffffff804d12c4>] mutex_lock_nested+0x94/0x250
>  [<ffffffff8025867d>] ? mark_held_locks+0x4d/0x90
>  [<ffffffff8025e024>] get_online_cpus+0x24/0x40
>  [<ffffffff8022fee1>] sched_getaffinity+0x11/0x80
>  [<ffffffff8026e6d9>] __synchronize_sched+0x19/0x90
>  [<ffffffff8022ed46>] detach_destroy_domains+0x46/0x50
>  [<ffffffff8022f6b9>] partition_sched_domains+0xf9/0x2b0
>  [<ffffffff80258801>] ? trace_hardirqs_on+0xc1/0xe0
>  [<ffffffff80268f9d>] rebuild_sched_domains+0x9d/0x3f0
>  [<ffffffff8026a858>] cpuset_common_file_write+0x2b8/0x5c0
>  [<ffffffff80268c00>] ? cpuset_test_cpumask+0x0/0x20
>  [<ffffffff80269f20>] ? cpuset_change_cpumask+0x0/0x20
>  [<ffffffff80265260>] ? started_after+0x0/0x50
>  [<ffffffff8026657c>] cgroup_file_write+0x7c/0x1a0
>  [<ffffffff8029cb3b>] vfs_write+0xcb/0x170
>  [<ffffffff8029ccd0>] sys_write+0x50/0x90
>  [<ffffffff8020b92b>] system_call_after_swapgs+0x7b/0x80
>
> CPU3 attaching NULL sched-domain.
>
>
>
>
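The chain above shows two conflicting acquisition orders: the CPU-hotplug
path (chains #2 and #1) takes cpu_hotplug.lock before sched_domains_mutex
via the cpuset hotplug notifier, while the cpuset write path (chain #0)
takes sched_domains_mutex and then cpu_hotplug.lock through
get_online_cpus(). A minimal user-space sketch of that AB-BA pattern,
using hypothetical pthread mutexes standing in for the kernel locks (not
the actual kernel code):

    /* Illustrative sketch only: pthread mutexes named after the kernel
     * locks in the report above. */
    #include <pthread.h>

    static pthread_mutex_t cpu_hotplug_lock   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t sched_domains_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Path 1 - CPU hotplug: hotplug lock first, then the sched-domains
     * lock (compare chains #2/#1: cpu_down -> partition_sched_domains). */
    static void *cpu_down_path(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&cpu_hotplug_lock);     /* cpu_down() */
            pthread_mutex_lock(&sched_domains_lock);   /* partition_sched_domains() */
            /* ... rebuild sched domains ... */
            pthread_mutex_unlock(&sched_domains_lock);
            pthread_mutex_unlock(&cpu_hotplug_lock);
            return NULL;
    }

    /* Path 2 - cpuset write: sched-domains lock first, then the hotplug
     * lock (compare chain #0: partition_sched_domains -> get_online_cpus). */
    static void *cpuset_write_path(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&sched_domains_lock);   /* partition_sched_domains() */
            pthread_mutex_lock(&cpu_hotplug_lock);     /* get_online_cpus() */
            /* ... synchronize ... */
            pthread_mutex_unlock(&cpu_hotplug_lock);
            pthread_mutex_unlock(&sched_domains_lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t1, t2;

            /* Run concurrently, each thread can end up holding one lock
             * while waiting for the other: the AB-BA inversion lockdep
             * reports as a circular dependency. */
            pthread_create(&t1, NULL, cpu_down_path, NULL);
            pthread_create(&t2, NULL, cpuset_write_path, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
    }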
