Message-ID: <5357E651.2040400@intel.com>
Date: Wed, 23 Apr 2014 18:12:01 +0200
From: "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>
To: Tejun Heo <tj@...nel.org>
CC: Li Zhong <zhong@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>, gregkh@...uxfoundation.org,
toshi.kani@...com
Subject: Re: [RFC PATCH v5 2/2] Use kernfs_break_active_protection() for device
online store callbacks
On 4/23/2014 4:23 PM, Tejun Heo wrote:
> Hello, Rafael.
Hi,
> On Wed, Apr 23, 2014 at 12:21:33AM +0200, Rafael J. Wysocki wrote:
>> Can you please elaborate a bit?
> Because it can get involved in larger locking dependency issues by
> joining dependency graphs of two otherwise largely disjoint
> subsystems. It has the potential to create deadlocks which don't
> need to exist.
Well, I do my best not to add unnecessary locks if that's what you mean.
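To make the coupling you describe concrete, here is a minimal, purely
illustrative sketch; all of the lock names below are made up and do not come
from the patch or any existing code:

#include <linux/mutex.h>

/* Hypothetical locks, for illustration only. */
static DEFINE_MUTEX(global_hotplug_lock);	/* shared across subsystems */
static DEFINE_MUTEX(a_lock);			/* subsystem A internal lock */
static DEFINE_MUTEX(b_lock);			/* subsystem B internal lock */

/* Pre-existing, harmless dependency: A calls into B while holding a_lock. */
static void a_notify_b(void)
{
	mutex_lock(&a_lock);
	mutex_lock(&b_lock);			/* a_lock -> b_lock */
	mutex_unlock(&b_lock);
	mutex_unlock(&a_lock);
}

/* Hotplug path in subsystem A, now also under the shared lock. */
static void a_hotplug(void)
{
	mutex_lock(&global_hotplug_lock);
	mutex_lock(&a_lock);			/* global -> a_lock */
	mutex_unlock(&a_lock);
	mutex_unlock(&global_hotplug_lock);
}

/* Sysfs store in subsystem B that also has to take the shared lock. */
static void b_store(void)
{
	mutex_lock(&b_lock);
	mutex_lock(&global_hotplug_lock);	/* b_lock -> global */
	mutex_unlock(&global_hotplug_lock);
	mutex_unlock(&b_lock);
}

With the shared lock in the picture, the chain global -> a_lock -> b_lock ->
global closes into a cycle; without it, the remaining a_lock -> b_lock edge
is harmless on its own.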
>> It is there to protect hotplug operations involving multiple devices
>> (in different subsystems) from racing with each other. Why exactly
>> is it bad?
> But why would different subsystems, say cpu and memory, use the same
> lock? Wouldn't those subsystems already have proper locking inside
> their own subsystems?
That locking is not sufficient.
> Why add this additional global lock across multiple subsystems?
That basically is because of how eject works when it is triggered via ACPI.
It is signaled for a device at the top of a subtree. That device may be a
container of some sort, and the eject involves everything below it in the
ACPI namespace. That may span multiple subsystems (CPUs, memory, PCI host
bridges, etc.).
We do that in two steps: offline (which can fail) and the eject proper
(which can't fail and makes all of the involved devices go away). All of
that has to be done in one go with respect to the sysfs-triggered
offline/online, and that's why the lock is there.
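Roughly, the sequence looks like the sketch below. acpi_eject_subtree() and
the devs[] array are made up for illustration; lock_device_hotplug() and
device_offline() are the driver core helpers, shown only to make the
ordering visible, so this is not the actual ACPI code:

#include <linux/device.h>

static int acpi_eject_subtree(struct device **devs, int ndevs)
{
	int i, ret = 0;

	/* Block sysfs-triggered online/offline for the whole operation. */
	lock_device_hotplug();

	/* Step 1: offline everything below the container; any failure aborts. */
	for (i = 0; i < ndevs; i++) {
		ret = device_offline(devs[i]);
		if (ret < 0) {
			/* The real code re-onlines what it already offlined. */
			goto out;
		}
	}

	/* Step 2: the eject proper -- can't fail, the devices go away. */
	for (i = 0; i < ndevs; i++)
		; /* unregister/remove devs[i] (details omitted) */

out:
	unlock_device_hotplug();
	return ret;
}

The sysfs online/offline store callbacks take the same lock, which is the
interaction the $subject patch is concerned with.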
Thanks,
Rafael