Message-ID: <2387162.Mp5gMh3zgJ@vostro.rjw.lan>
Date: Thu, 02 May 2013 02:53:13 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Toshi Kani <toshi.kani@...com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
isimatu.yasuaki@...fujitsu.com,
vasilis.liaskovitis@...fitbricks.com
Subject: Re: [PATCH 3/3 RFC] ACPI / hotplug: Use device offline/online for graceful hot-removal
On Wednesday, May 01, 2013 02:20:12 PM Toshi Kani wrote:
> On Wed, 2013-05-01 at 17:05 +0200, Rafael J. Wysocki wrote:
> > On Tuesday, April 30, 2013 05:49:38 PM Toshi Kani wrote:
> > > On Mon, 2013-04-29 at 14:29 +0200, Rafael J. Wysocki wrote:
> > > > From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> > > >
> > > > Modify the generic ACPI hotplug code to be able to check if devices
> > > > scheduled for hot-removal may be gracefully removed from the system
> > > > using the device offline/online mechanism introduced previously.
> > > >
> > > > Namely, make acpi_scan_hot_remove(), which handles device hot-removal,
> > > > call device_offline() for all physical companions of the ACPI device
> > > > nodes involved in the operation and check the results.  If any of
> > > > the device_offline() calls fails, the function will not progress to
> > > > the removal phase (which cannot be aborted) unless its (new) 'force'
> > > > argument is set; in case of a failing offline it will put the devices
> > > > it has already offlined back online.
> > > >
> > > > In support of the 'forced' hot-removal, add a new sysfs attribute
> > > > 'force_remove' that will reside in every ACPI hotplug profile
> > > > present under /sys/firmware/acpi/hotplug/.
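[Editor's note: the offline-then-remove policy described above, with its rollback on a failed offline and the 'force' escape hatch, can be sketched in plain userspace C. The struct and helper names below are made up for illustration and model, not reproduce, the kernel's device_offline()/device_online() behavior.]

```c
#include <assert.h>
#include <stdbool.h>

#define NDEV 3

/* Stand-in for a physical companion device on an ACPI node's
 * physical_node_list.  'can_offline' simulates a device whose
 * offline operation refuses (e.g. the last online CPU). */
struct fake_device {
	const char *name;
	bool online;
	bool can_offline;
};

static int device_offline(struct fake_device *dev)
{
	if (!dev->can_offline)
		return -1;
	dev->online = false;
	return 0;
}

static void device_online(struct fake_device *dev)
{
	dev->online = true;
}

/* Try to offline every companion before the (unabortable) removal
 * phase.  Without 'force', a single failure puts the devices already
 * offlined back online and aborts; with 'force', removal proceeds
 * regardless of offline failures. */
static int offline_companions(struct fake_device *devs, int n, bool force)
{
	for (int i = 0; i < n; i++) {
		if (device_offline(&devs[i]) && !force) {
			for (int j = 0; j < i; j++)
				device_online(&devs[j]);
			return -1;
		}
	}
	return 0;
}
```

Per the patch description, userspace would select the forced path by writing to the new 'force_remove' attribute of the relevant hotplug profile under /sys/firmware/acpi/hotplug/.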
> > > >
> > > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> > > > ---
> > > > Documentation/ABI/testing/sysfs-firmware-acpi | 9 +-
> > > > drivers/acpi/internal.h | 2
> > > > drivers/acpi/scan.c | 97 ++++++++++++++++++++++++--
> > > > drivers/acpi/sysfs.c | 27 +++++++
> > > > include/acpi/acpi_bus.h | 3
> > > > 5 files changed, 131 insertions(+), 7 deletions(-)
> > > >
> > > :
> > > > Index: linux-pm/drivers/acpi/scan.c
> > > > ===================================================================
> > > > --- linux-pm.orig/drivers/acpi/scan.c
> > > > +++ linux-pm/drivers/acpi/scan.c
> > > > @@ -120,7 +120,61 @@ acpi_device_modalias_show(struct device
> > > > }
> > > > static DEVICE_ATTR(modalias, 0444, acpi_device_modalias_show, NULL);
> > > >
> > > > -static int acpi_scan_hot_remove(struct acpi_device *device)
> > > > +static acpi_status acpi_bus_offline_companions(acpi_handle handle, u32 lvl,
> > > > + void *data, void **ret_p)
> > > > +{
> > > > + struct acpi_device *device = NULL;
> > > > + struct acpi_device_physical_node *pn;
> > > > + bool force = *((bool *)data);
> > > > + acpi_status status = AE_OK;
> > > > +
> > > > + if (acpi_bus_get_device(handle, &device))
> > > > + return AE_OK;
> > > > +
> > > > + mutex_lock(&device->physical_node_lock);
> > > > +
> > > > + list_for_each_entry(pn, &device->physical_node_list, node) {
> > >
> > > I do not think physical_node_list is set for ACPI processor devices, so
> > > this code is a NOP at this point.  I think properly initializing
> > > physical_node_list for CPUs and memory blocks is one of the key items
> > > in this approach.
> >
> > It surely is. :-)
> >
> > I've almost done that for CPUs, but that still requires some more work.
> > Hopefully, it'll be mostly done later this week.
>
> Cool!
>
> > Memory will take some more time I guess, though.
>
> Yes, memory has an ordering issue when using glue.c.
> https://lkml.org/lkml/2013/3/26/398
Well, that may not be such a big problem. I'll have a look at that later.
Thanks,
Rafael
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
--