Message-ID: <20150323140041.GQ8656@n2100.arm.linux.org.uk>
Date: Mon, 23 Mar 2015 14:00:41 +0000
From: Russell King - ARM Linux <linux@....linux.org.uk>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Nicolas Pitre <nico@...xnic.net>,
Mark Rutland <mark.rutland@....com>,
Krzysztof Kozlowski <k.kozlowski@...sung.com>,
Arnd Bergmann <arnd@...db.de>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Catalin Marinas <catalin.marinas@....com>,
Stephen Boyd <sboyd@...eaurora.org>,
linux-kernel@...r.kernel.org, Will Deacon <will.deacon@....com>,
linux-arm-kernel@...ts.infradead.org,
Marek Szyprowski <m.szyprowski@...sung.com>
Subject: Re: [PATCH v2] ARM: Don't use complete() during __cpu_die
On Mon, Mar 23, 2015 at 06:21:33AM -0700, Paul E. McKenney wrote:
> On Mon, Mar 23, 2015 at 12:55:45PM +0000, Russell King - ARM Linux wrote:
> > What if the cpu_hotplug_state for the CPU changes between reading it,
> > testing its value, and then writing it, or do we guarantee that it
> > can't change other than by this function here? If it can't change,
> > then what's the point of using atomic_t for this?
>
> It indeed cannot change -here-, but there are other situations where
> more than one CPU can be attempting to change it at the same time.
> The reason that it cannot change here is that the variable has the
> value that indicates that the previous offline completed within the
> timeout period, so there are no outstanding writes.
>
> When going offline, the outgoing CPU might take so long leaving that
> the surviving CPU times out, thus both CPUs write to the variable at
> the same time, which by itself requires atomics. If this seems
> unlikely, consider virtualized environments where the hypervisor
> might preempt the guest OS.
If two CPUs write using atomic_set() to the same location, what value do
you end up with? It's just the same as if two CPUs write normally to the
same location.
The only way it can be different is if you use atomic_cmpxchg() or
cmpxchg(), so you can atomically test that the original value is what
you expect prior to writing it.
To illustrate this (I'll abbreviate the per-cpu state):
	CPU0						CPU1
	oldstate = atomic_read(state);
	if (oldstate == CPU_DEAD)	/* it isn't */
	  atomic_cmpxchg(state, oldstate, CPU_BROKEN)
							atomic_xchg(state, CPU_DEAD)
							/* succeeds */
	    != oldstate /* fails */
	... so doesn't loop
	cpu_wait_death() returns false (failed)
Here, we report that the CPU failed to go offline, but the state is set
to CPU_DEAD. If it had been slightly earlier, CPU0 would have updated
it to CPU_POST_DEAD.
If we instead got rid of the atomic_* calls and looked at the code
without them confusing the picture, we'd probably have spotted that.
This is why, whenever I see atomic_read() and atomic_set(), I'm very
suspicious that the code is correct; virtually every time I've seen
this pattern, the code has had some problem. I utterly hate code which
uses these functions.
So, I'd suggest getting rid of atomic_read() and atomic_set() here,
converting atomic_cmpxchg() to a plain cmpxchg(), and replacing that
atomic_xchg() with a similar call to cmpxchg(), so that a broken CPU
doesn't confusingly end up being reported as broken yet getting a
CPU_DEAD state.
After all, we know that a CPU going down _should_ be online before it
goes down - if it isn't online, how can it be executing code? :)
--
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.