Message-ID: <20121129113635.GC639@dhcp-192-168-178-175.profitbricks.localdomain>
Date: Thu, 29 Nov 2012 12:36:35 +0100
From: Vasilis Liaskovitis <vasilis.liaskovitis@...fitbricks.com>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: linux-acpi@...r.kernel.org, Toshi Kani <toshi.kani@...com>,
Hanjun Guo <guohanjun@...wei.com>,
isimatu.yasuaki@...fujitsu.com, wency@...fujitsu.com,
lenb@...nel.org, gregkh@...uxfoundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Tang Chen <tangchen@...fujitsu.com>
Subject: Re: [RFC PATCH v3 0/3] acpi: Introduce prepare_remove device
operation
On Thu, Nov 29, 2012 at 11:15:31AM +0100, Rafael J. Wysocki wrote:
> On Wednesday, November 28, 2012 11:41:36 AM Toshi Kani wrote:
> > On Wed, 2012-11-28 at 19:05 +0800, Hanjun Guo wrote:
> > > We met the same problem when doing compute node hotplug, so it is a good idea
> > > to introduce prepare_remove before the actual device removal.
> > >
> > > I think we could do more in prepare_remove, such as rollback. In most cases, we can
> > > now offline most memory sections except for pages used by the kernel; should we roll
> > > back and online those memory sections again if prepare_remove fails?
> >
> > I think a hot-plug operation should have all-or-nothing semantics. That
> > is, an operation should either complete successfully or roll back to the
> > original state.
>
> That's correct.
>
> > > As you may know, the ACPI-based hotplug framework we are working on has already
> > > addressed this problem, and the way we solve it is quite similar to yours.
> > >
> > > We introduce hp_ops in struct acpi_device_ops:
> > >
> > > struct acpi_device_ops {
> > >         acpi_op_add add;
> > >         acpi_op_remove remove;
> > >         acpi_op_start start;
> > >         acpi_op_bind bind;
> > >         acpi_op_unbind unbind;
> > >         acpi_op_notify notify;
> > > #ifdef CONFIG_ACPI_HOTPLUG
> > >         struct acpihp_dev_ops *hp_ops;
> > > #endif /* CONFIG_ACPI_HOTPLUG */
> > > };
> > >
> > > in hp_ops, we divide prepare_remove into six smaller steps, that is:
> > > 1) pre_release(): optional step to mark the device as about to be removed/busy
> > > 2) release(): reclaim the device from the running system
> > > 3) post_release(): roll back if cancelled by the user or an error happened
> > > 4) pre_unconfigure(): optional step to solve possible dependency issues
> > > 5) unconfigure(): remove the device from the running system
> > > 6) post_unconfigure(): free the resources used by the device
> > >
> > > In this way, we can easily roll back if an error happens.
> > > What do you think of this solution, any suggestions? I think we can reach
> > > a better approach by sharing ideas. :)
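
(As an aside, for readers following this thread: a rough sketch of what such an
acpihp_dev_ops table could look like, based purely on the six steps Hanjun lists
above, might be the following. The callback names come from his list, but the
signatures and the acpi_device argument are my assumption, not the actual code.)

struct acpihp_dev_ops {
        /* 1) optional: mark the device busy / about to be removed */
        int (*pre_release)(struct acpi_device *adev);
        /* 2) reclaim the device from the running system */
        int (*release)(struct acpi_device *adev);
        /* 3) roll back if the operation is cancelled or has failed */
        int (*post_release)(struct acpi_device *adev);
        /* 4) optional: resolve dependency issues before unconfiguring */
        int (*pre_unconfigure)(struct acpi_device *adev);
        /* 5) remove the device from the running system */
        int (*unconfigure)(struct acpi_device *adev);
        /* 6) free the resources used by the device */
        int (*post_unconfigure)(struct acpi_device *adev);
};

Presumably the hotplug core would then walk these callbacks in order and invoke
post_release() to undo release() when a later step fails or the user cancels.
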
> >
> > Yes, sharing ideas is good. :) I do not know if we need all 6 steps (I
> > have not looked at all your changes yet...), but in my mind, a hot-plug
> > operation should be composed of the following 3 phases.
> >
> > 1. Validate phase - Verify that the request is a supported operation. All
> > known restrictions are checked in this phase. For instance, if a
> > hot-remove request involves kernel memory, it fails in this phase.
> > Since this phase makes no changes, no rollback is necessary on failure.
>
> Actually, we can't do it this way, because the conditions may change between
> the check and the execution. So the first phase needs to involve execution
> to some extent, although only as far as it remains reversible.
>
> > 2. Execute phase - Perform the hot-add / hot-remove operation in a way that
> > can be rolled back in case of error or cancellation.
>
> I would just merge 1 and 2.
I agree phases 1 and 2 can be merged, at least for the current ACPI framework.
E.g. for memory hotplug, the mm function we call for memory removal
(remove_memory) handles both of these phases.
The new ACPI framework could perhaps expand the operations as Hanjun described,
if that makes sense.
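
To make this concrete, here is a minimal sketch of how a merged
"validate + execute" prepare_remove callback could look for the memory hotplug
driver. This is only an illustration: the callback name, the
acpi_memory_device/acpi_memory_info layout and the two-argument
remove_memory(start, size) prototype are my assumptions, not the actual
patches.

#include <linux/acpi.h>
#include <linux/list.h>
#include <linux/memory_hotplug.h>

/* Hypothetical per-device bookkeeping, loosely modeled on acpi_memhotplug.c */
struct acpi_memory_info {
        struct list_head list;
        u64 start_addr;
        u64 length;
};

struct acpi_memory_device {
        struct acpi_device *device;
        struct list_head res_list;      /* list of struct acpi_memory_info */
};

static int acpi_memory_device_prepare_remove(struct acpi_device *device)
{
        struct acpi_memory_device *mem_device = acpi_driver_data(device);
        struct acpi_memory_info *info;
        int ret;

        /*
         * remove_memory() offlines and removes the range, so it covers
         * both the "validate" and the "execute" phase in one call: if
         * any page in the range cannot be offlined (e.g. kernel memory),
         * it fails and the range stays online.
         */
        list_for_each_entry(info, &mem_device->res_list, list) {
                ret = remove_memory(info->start_addr, info->length);
                if (ret)
                        return ret;     /* abort; the device is not removed */
        }

        return 0;
}

Note that this sketch is not fully all-or-nothing: if a later range fails after
earlier ranges have already been removed, a complete implementation would have
to re-add/online those earlier ranges, which is exactly the rollback problem
discussed above.
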
thanks,
- Vasilis