Message-ID: <20150205161100.GQ8656@n2100.arm.linux.org.uk>
Date: Thu, 5 Feb 2015 16:11:00 +0000
From: Russell King - ARM Linux <linux@....linux.org.uk>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Krzysztof Kozlowski <k.kozlowski@...sung.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Arnd Bergmann <arnd@...db.de>,
Mark Rutland <mark.rutland@....com>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Stephen Boyd <sboyd@...eaurora.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>
Subject: Re: [PATCH v2] ARM: Don't use complete() during __cpu_die
On Thu, Feb 05, 2015 at 06:29:18AM -0800, Paul E. McKenney wrote:
> Works for me, assuming no hidden uses of RCU in the IPI code. ;-)
Sigh... I kinda knew it wouldn't be this simple. The gic code which
actually raises the IPI takes a raw spinlock, so it's not going to be
that easy - there's a small theoretical window where we have taken
this lock, written the register to send the IPI, and then dropped the
lock - the store which releases the lock could get lost if the
CPU's power is cut quickly at that point.
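For illustration, the pattern is roughly this (a simplified sketch only,
not the actual irq-gic.c code; the lock, base and register names below
are stand-ins):

    #include <linux/spinlock.h>
    #include <linux/io.h>
    #include <linux/cpumask.h>

    /* Simplified sketch - the lock, base and offset names are stand-ins,
     * not the real irq-gic.c identifiers. */
    static DEFINE_RAW_SPINLOCK(sgi_lock);
    static void __iomem *dist_base;
    #define SGI_REG_OFFSET	0xf00	/* illustrative SGI trigger register */

    static void raise_ipi(const struct cpumask *mask, unsigned int irq)
    {
    	unsigned long flags;
    	unsigned long map = cpumask_bits(mask)[0] & 0xff;	/* target CPU bits */

    	raw_spin_lock_irqsave(&sgi_lock, flags);

    	/* Kick the IPI out to the target CPUs. */
    	writel_relaxed(map << 16 | irq, dist_base + SGI_REG_OFFSET);

    	/*
    	 * Window of interest: the IPI is now in flight.  If the receiving
    	 * CPU cuts our power before the store below (which releases the
    	 * lock) becomes visible, every other CPU sees the lock as held
    	 * forever.
    	 */
    	raw_spin_unlock_irqrestore(&sgi_lock, flags);
    }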
Also, we _do_ need the second cache flush in place to ensure that the
unlock is seen by other CPUs.
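The ordering needed on the dying CPU is roughly this (a sketch only;
flush_cache_louis() is the ARM flush-to-level-of-unification helper, and
the other names here are placeholders, not the real smp.c functions):

    #include <asm/cacheflush.h>

    /* Sketch of the ordering on the dying CPU - helper names are
     * placeholders, not the actual arch/arm/kernel/smp.c functions. */
    static void cpu_dying_signal(void)
    {
    	/* Flush anything written so far before telling anyone we're done. */
    	flush_cache_louis();

    	signal_cpu_dead();	/* e.g. raise the IPI via the gic */

    	/*
    	 * Second flush: the IPI path above took and released a spinlock;
    	 * push that unlock out of our cache so other CPUs can see it
    	 * before our power is cut.
    	 */
    	flush_cache_louis();
    }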
We could work around that by taking and releasing the lock in the IPI
processing function... but this is starting to look less attractive
as the lock is private to irq-gic.c.
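On the receiving side, that workaround would look roughly like this (a
sketch; gic_sync_sgi_lock() is a hypothetical helper which would have to
be exported from irq-gic.c, since the lock is static there):

    /* Sketch of the killing CPU's side of the workaround.  Taking and
     * dropping the sender's SGI lock spins until the dying CPU's unlock
     * has become visible, so it is then safe to cut its power.
     * gic_sync_sgi_lock() is hypothetical - the lock is private to
     * irq-gic.c today. */
    static void handle_cpu_dead_ipi(unsigned int dying_cpu)
    {
    	gic_sync_sgi_lock();	/* wait for the sender's unlock */

    	/* dying_cpu can now safely have its power removed. */
    }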
Well, we're very close to 3.19 - too close to be trying to sort this
out now - so I'm hoping that your changes which cause this RCU error
are *not* going in during this merge window, because we seem to have
something of a problem right now which needs more time to resolve.
--
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.