Message-ID: <537F6821.6050104@linux.vnet.ibm.com>
Date: Fri, 23 May 2014 20:54:17 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Frederic Weisbecker <fweisbec@...il.com>
CC: peterz@...radead.org, tglx@...utronix.de, mingo@...nel.org,
tj@...nel.org, rusty@...tcorp.com.au, akpm@...ux-foundation.org,
hch@...radead.org, mgorman@...e.de, riel@...hat.com, bp@...e.de,
rostedt@...dmis.org, mgalbraith@...e.de, ego@...ux.vnet.ibm.com,
paulmck@...ux.vnet.ibm.com, oleg@...hat.com, rjw@...ysocki.net,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 2/3] CPU hotplug, stop-machine: Plug race-window that
leads to "IPI-to-offline-CPU"
On 05/23/2014 08:34 PM, Frederic Weisbecker wrote:
> On Fri, May 23, 2014 at 08:15:35PM +0530, Srivatsa S. Bhat wrote:
>> On 05/23/2014 06:52 PM, Frederic Weisbecker wrote:
>>> On Fri, May 23, 2014 at 03:42:20PM +0530, Srivatsa S. Bhat wrote:
>>>> During CPU offline, stop-machine is used to take control over all the online
>>>> CPUs (via the per-cpu stopper thread) and then run take_cpu_down() on the CPU
>>>> that is to be taken offline.
>>>>
[...]
>>>> kernel/stop_machine.c | 39 ++++++++++++++++++++++++++++++++++-----
>>>> 1 file changed, 34 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
>>>> index 01fbae5..288f7fe 100644
>>>> --- a/kernel/stop_machine.c
>>>> +++ b/kernel/stop_machine.c
>>>> @@ -130,8 +130,10 @@ enum multi_stop_state {
>>>> MULTI_STOP_NONE,
>>>> /* Awaiting everyone to be scheduled. */
>>>> MULTI_STOP_PREPARE,
>>>> - /* Disable interrupts. */
>>>> - MULTI_STOP_DISABLE_IRQ,
>>>> + /* Disable interrupts on CPUs not in ->active_cpus mask. */
>>>> + MULTI_STOP_DISABLE_IRQ_INACTIVE,
>>>> + /* Disable interrupts on CPUs in ->active_cpus mask. */
>>>> + MULTI_STOP_DISABLE_IRQ_ACTIVE,
>>>> /* Run the function */
>>>> MULTI_STOP_RUN,
>>>> /* Exit */
>>>> @@ -189,12 +191,39 @@ static int multi_cpu_stop(void *data)
>>>> do {
>>>> /* Chill out and ensure we re-read multi_stop_state. */
>>>> cpu_relax();
>>>> +
>>>> + /*
>>>> + * We use 2 separate stages to disable interrupts, namely
>>>> + * _INACTIVE and _ACTIVE, to ensure that the inactive CPUs
>>>> + * disable their interrupts first, followed by the active CPUs.
>>>> + *
>>>> + * This is done to avoid a race in the CPU offline path, which
>>>> + * can lead to receiving IPIs on the outgoing CPU *after* it
>>>> + * has gone offline.
>>>> + *
>>>> + * During CPU offline, we don't want the other CPUs to send
>>>> + * IPIs to the active_cpu (the outgoing CPU) *after* it has
>>>> + * disabled interrupts (because, then it will notice the IPIs
>>>> + * only after it has gone offline). We can prevent this by
>>>> + * making the other CPUs disable their interrupts first - that
>>>> + * way, they will run the stop-machine code with interrupts
>>>> + * disabled, and hence won't send IPIs after that point.
>>>> + */
>>>> +
>>>> if (msdata->state != curstate) {
>>>> curstate = msdata->state;
>>>> switch (curstate) {
>>>> - case MULTI_STOP_DISABLE_IRQ:
>>>> - local_irq_disable();
>>>> - hard_irq_disable();
>>>> + case MULTI_STOP_DISABLE_IRQ_INACTIVE:
>>>> + if (!is_active) {
>>>> + local_irq_disable();
>>>> + hard_irq_disable();
>>>> + }
>>>> + break;
>>>> + case MULTI_STOP_DISABLE_IRQ_ACTIVE:
>>>> + if (is_active) {
>>>> + local_irq_disable();
>>>> + hard_irq_disable();
>>>> + }
>>>
>>> Do we actually need that now that we are flushing the IPI queue on CPU dying?
>>>
>>
>> Yes, we do. Flushing the IPI queue is one thing - it guarantees that a CPU
>> doesn't go offline without finishing its pending work. Not receiving IPIs
>> after going offline is a different thing - it avoids warnings from the IPI
>> handling code (a late IPI is harmless if the queue has already been flushed,
>> but it would still trigger the warning).
>
> I'm confused. Perhaps I don't understand well how these things interact. How
> does it avoid the warning? Isn't there still a risk that some IPIs arrive
> late due to hardware latency?
>
> I mean either we do:
>
>     local_irq_enable()
>     wait_for_pending_ipi()
>     local_irq_disable()
>
> Or we do:
>
>     hotplug_cpu_down() {
>         flush_ipi()
>     }
>
> But something in between looks broken:
>
>     local_irq_disable()
>     local_irq_enable()
>
>     flush_ipi()
>
>
>>
>> So I think it is good to have both, so that we can keep the CPU offline path
>> very clean - no pending work left around, and no possibility of (real or
>> spurious) warnings.
>
> Ah, maybe what you want to avoid is this:
>
>       CPU 0                           CPU 1
>       ---------------------------------------------------------
>
>       send IPI to 1
>                                       flush_ipi()
>                                       set_cpu_offline()
>                                       get_ipi()
>                                       // late IPI, but queue is flushed already
>                                       smp_single_function_interrupt() {
>                                               WARN()
>                                       }
>
> Yeah but still, your patch doesn't deal with late hardware IPIs.
> How about we move the warning into the IPI callback iterator:
>
> -        WARN_ON_ONCE(cpu_is_offline())
>
>          llist_for_each(...) {
> +                WARN_ON_ONCE(cpu_is_offline())
>                  csd->func()
>          }
>
> Since what matters is that all functions are executed before going offline.
>
Right, we can't do anything about late IPIs from the hardware, but we should
warn only if there really was pending work to run while the CPU was already
offline. What you suggested above takes care of that. I'll incorporate it in
an updated patch, thank you!
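
Something like the following is what I have in mind (an untested sketch of
the change to generic_smp_call_function_single_interrupt() in kernel/smp.c;
the helper and variable names are assumptions based on the current code
there):

	void generic_smp_call_function_single_interrupt(void)
	{
		struct llist_node *entry;
		struct call_single_data *csd, *csd_next;

		entry = llist_del_all(&__get_cpu_var(call_single_queue));
		entry = llist_reverse_order(entry);

		llist_for_each_entry_safe(csd, csd_next, entry, llist) {
			/*
			 * Warn only when there was actually pending work
			 * to run on an offline CPU; a late hardware IPI
			 * that finds an empty queue is harmless.
			 */
			WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
			csd->func(csd->info);
			csd_unlock(csd);
		}
	}

That way the unconditional warning at the top of the function goes away, and
a stray IPI that arrives after the queue has already been flushed no longer
triggers a false positive.
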
Regards,
Srivatsa S. Bhat