Message-ID: <20120627230823.GU15811@google.com>
Date: Wed, 27 Jun 2012 16:08:23 -0700
From: Tejun Heo <tj@...nel.org>
To: Glauber Costa <glommer@...allels.com>
Cc: Cgroups <cgroups@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: "Regression" with cd3d09527537
On Tue, Jun 26, 2012 at 04:43:03PM +0400, Glauber Costa wrote:
> Hi,
>
> I've recently started seeing a lockdep warning at the end of *every*
> "init 0" issued on my machine. Reboots, on the other hand, are fine,
> and that's probably why I never saw it earlier. The log is quite
> extensive, but shows the following dependency chain:
>
> [ 83.982111] -> #4 (cpu_hotplug.lock){+.+.+.}:
> [...]
> [ 83.982111] -> #3 (jump_label_mutex){+.+...}:
> [...]
> [ 83.982111] -> #2 (sk_lock-AF_INET){+.+.+.}:
> [...]
> [ 83.982111] -> #1 (&sig->cred_guard_mutex){+.+.+.}:
> [...]
> [ 83.982111] -> #0 (cgroup_mutex){+.+.+.}:
>
> I've recently fixed bugs in the lock ordering that cpusets impose
> on cpu_hotplug.lock through jump_label_mutex, and initially thought
> this was the same kind of issue. But that was not the case.
>
> I've omitted the full backtrace for readability, but I ran this with
> all cgroups disabled except cpuset, so it can't be sock memcg
> (after my initial reaction of "oh, fuck, not again"). That
> jump_label has been there for years, and it comes from the code that
> enables and disables socket timestamps (net_enable_timestamp).
Yeah, there are multiple really large locks at play here - jump label,
threadgroup and cgroup_mutex. It isn't pretty. Can you please post
the full lockdep dump? The above only shows a single locking chain;
I'd like to see the others.
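For reference, this is roughly the shape of the jump label usage being
pointed at above - a simplified sketch, not the actual net/core source,
with made-up sketch_* names:

/*
 * Illustrative sketch only.  The point is that the first user to turn
 * timestamps on goes through the static key slow path, which takes
 * jump_label_mutex and, on arches that patch code via stop_machine(),
 * can end up under cpu_hotplug.lock as well.
 */
#include <linux/jump_label.h>
#include <linux/skbuff.h>

static struct static_key sketch_netstamp_needed = STATIC_KEY_INIT_FALSE;

/* Typically reached from setsockopt(SO_TIMESTAMP...) with the socket
 * lock (sk_lock-AF_INET) already held. */
void sketch_net_enable_timestamp(void)
{
	/* slow path: jump_label_lock() -> jump_label_mutex, then the
	 * arch code patching machinery */
	static_key_slow_inc(&sketch_netstamp_needed);
}

/* Fast path in packet processing: a single patched branch that stays
 * disabled until someone asks for timestamps. */
static inline void sketch_net_timestamp_check(struct sk_buff *skb)
{
	if (static_key_false(&sketch_netstamp_needed))
		__net_timestamp(skb);
}

The slow path is what drags jump_label_mutex (and possibly
cpu_hotplug.lock via the patching code) under whatever lock the caller
already holds, which is how sk_lock-AF_INET ends up in the middle of
the chain above.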
Thanks.
--
tejun