Message-ID: <alpine.DEB.2.21.1809051419580.1416@nanos.tec.linutronix.de>
Date: Wed, 5 Sep 2018 14:23:46 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Neeraj Upadhyay <neeraju@...eaurora.org>
cc: josh@...htriplett.org, peterz@...radead.org, mingo@...nel.org,
jiangshanlai@...il.com, dzickus@...hat.com,
brendan.jackman@....com, malat@...ian.org,
linux-kernel@...r.kernel.org, sramana@...eaurora.org,
linux-arm-msm@...r.kernel.org
Subject: Re: [PATCH] cpu/hotplug: Fix rollback during error-out in
takedown_cpu()
On Wed, 5 Sep 2018, Thomas Gleixner wrote:
> On Tue, 4 Sep 2018, Neeraj Upadhyay wrote:
> > ret = cpuhp_down_callbacks(cpu, st, target);
> > if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
> > - cpuhp_reset_state(st, prev_state);
> > + /*
> > + * As st->last is not set, cpuhp_reset_state() increments
> > + * st->state, which results in CPUHP_AP_SMPBOOT_THREADS being
> > + * skipped during rollback. So, don't use it here.
> > + */
> > + st->rollback = true;
> > + st->target = prev_state;
> > + st->bringup = !st->bringup;
>
> No, this is just papering over the actual problem.
>
> The state inconsistency happens in take_cpu_down() when it returns with a
> failure from __cpu_disable(): in that case it returns with st->state still
> at CPUHP_TEARDOWN_CPU, and st->state is then incremented in undo_cpu_down().
>
> That's the real issue and we need to analyze the whole cpu_down rollback
> logic first.
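
To recap the failure path, take_cpu_down() looks roughly like this
(kernel/cpu.c, condensed from memory, so take the details with a grain
of salt):

static int take_cpu_down(void *_param)
{
	struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
	enum cpuhp_state target = max((int)st->target, CPUHP_AP_OFFLINE);
	int err, cpu = smp_processor_id();

	/* Ensure this CPU doesn't handle any more interrupts. */
	err = __cpu_disable();
	if (err < 0)
		return err;	/* st->state is still CPUHP_TEARDOWN_CPU here */

	/* We come in at CPUHP_TEARDOWN_CPU and must not run it again. */
	WARN_ON(st->state != CPUHP_TEARDOWN_CPU);
	st->state--;
	/* Invoke the former CPU_DYING callbacks */
	for (; st->state > target; st->state--)
		cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);

	tick_handover_do_timer();
	stop_machine_park(cpu);
	return 0;
}

So when __cpu_disable() fails, the function bails out before the
st->state--, and undo_cpu_down() then "rolls back" from a state which
was never actually torn down.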
And looking closer, this is a general issue; the TEARDOWN state merely makes
it simple to observe. It's universally broken whenever the first teardown
callback fails, because st->state is only decremented _AFTER_ a callback
returns success, while undo_cpu_down() increments it unconditionally.
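
For reference, this is roughly the relevant code before the fix
(condensed; the real undo_cpu_down() also honours skip_onerr):

static void undo_cpu_down(unsigned int cpu, struct cpuhp_cpu_state *st)
{
	/* Unconditionally steps past the state the callbacks stopped at */
	for (st->state++; st->state < st->target; st->state++)
		cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
}

static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
				enum cpuhp_state target)
{
	enum cpuhp_state prev_state = st->state;
	int ret = 0;

	/* st->state is decremented only after a callback succeeded */
	for (; st->state > target; st->state--) {
		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
		if (ret) {
			st->target = prev_state;
			/*
			 * If the very first callback failed, st->state is
			 * still == prev_state here, so undo_cpu_down() bumps
			 * it to prev_state + 1, i.e. beyond where we came
			 * from.
			 */
			undo_cpu_down(cpu, st);
			break;
		}
	}
	return ret;
}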
Patch below.
Thanks,
tglx
----
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -916,7 +916,8 @@ static int cpuhp_down_callbacks(unsigned
ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
if (ret) {
st->target = prev_state;
- undo_cpu_down(cpu, st);
+ if (st->state < prev_state)
+ undo_cpu_down(cpu, st);
break;
}
}
@@ -969,7 +970,7 @@ static int __ref _cpu_down(unsigned int
* to do the further cleanups.
*/
ret = cpuhp_down_callbacks(cpu, st, target);
- if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+ if (ret && st->state == CPUHP_TEARDOWN_CPU && st->state < prev_state) {
cpuhp_reset_state(st, prev_state);
__cpuhp_kick_ap(st);
}
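
With that applied, the down loop ends up looking roughly like this:

	for (; st->state > target; st->state--) {
		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
		if (ret) {
			st->target = prev_state;
			/*
			 * If the first callback failed, nothing was torn
			 * down yet and st->state == prev_state, so there is
			 * nothing to undo.
			 */
			if (st->state < prev_state)
				undo_cpu_down(cpu, st);
			break;
		}
	}

and _cpu_down() only kicks the AP rollback when the failure happened in
takedown_cpu() itself, i.e. when st->state is exactly CPUHP_TEARDOWN_CPU.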