Message-ID: <535E2C5A.9090702@linaro.org>
Date: Mon, 28 Apr 2014 11:24:26 +0100
From: Daniel Thompson <daniel.thompson@...aro.org>
To: Steven Rostedt <rostedt@...dmis.org>
CC: kgdb-bugreport@...ts.sourceforge.net,
Jason Wessel <jason.wessel@...driver.com>, patches@...aro.org,
linaro-kernel@...ts.linaro.org, linux-kernel@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jslaby@...e.cz>,
Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...hat.com>,
John Stultz <john.stultz@...aro.org>,
Anton Vorontsov <anton.vorontsov@...aro.org>,
Colin Cross <ccross@...roid.com>, kernel-team@...roid.com
Subject: Re: [RFC v3 1/9] sysrq: Implement __handle_sysrq_nolock to avoid
recursive locking in kdb
On 25/04/14 17:45, Steven Rostedt wrote:
> On Fri, 25 Apr 2014 17:29:22 +0100
> Daniel Thompson <daniel.thompson@...aro.org> wrote:
>
>> If kdb is triggered using SysRq-g, then any use of the sr command results
>> in the SysRq key table lock being recursively acquired, killing the debug
>> session. This patch resolves the problem by introducing a _nolock
>> alternative for __handle_sysrq.
>>
>> Strictly speaking, this approach risks racing on the key table when kdb is
>> triggered by something other than SysRq-g; however, in that case any other
>> CPU involved should release the spin lock before kgdb parks the slave
>> CPUs.
>
> Is that case documented somewhere in the code comments?
Perhaps not near enough to the _nolock itself, but the primary comment is
here (in the same file as kdb_sr):
--- cut here ---
* kdb_main_loop - After initial setup and assignment of the
* controlling cpu, all cpus are in this loop. One cpu is in
* control and will issue the kdb prompt, the others will spin
* until 'go' or cpu switch.
--- cut here ---
The mechanism kgdb uses to quiesce the other CPUs means they cannot be
parked while inside irqsave critical sections, so any CPU holding the key
table lock will have released it before the slaves are parked.
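
In case it helps review, here is a rough, untested sketch of the shape of
the split. The lookup and mask-check helpers below are stand-ins rather
than the exact functions in drivers/tty/sysrq.c, and the real code does
rather more than this:

--- cut here ---
static DEFINE_SPINLOCK(sysrq_key_table_lock);

/*
 * Core handler: look up the key op and run it. No locking here, so the
 * caller must either hold sysrq_key_table_lock or, as kdb does, know
 * that every other potential user of the table is already parked.
 */
static void __handle_sysrq_nolock(int key, bool check_mask)
{
        struct sysrq_key_op *op_p = __sysrq_get_key_op(key);

        if (op_p && (!check_mask || sysrq_on_mask(op_p->enable_mask)))
                op_p->handler(key);
}

/* The normal entry point keeps its existing locking behaviour. */
void __handle_sysrq(int key, bool check_mask)
{
        unsigned long flags;

        spin_lock_irqsave(&sysrq_key_table_lock, flags);
        __handle_sysrq_nolock(key, check_mask);
        spin_unlock_irqrestore(&sysrq_key_table_lock, flags);
}

/*
 * kdb's sr command switches to the _nolock variant, avoiding the
 * recursive spin_lock_irqsave() when kdb was entered via SysRq-g.
 */
static int kdb_sr(int argc, const char **argv)
{
        if (argc != 1)
                return KDB_ARGCOUNT;

        kdb_trap_printk++;
        __handle_sysrq_nolock(*argv[1], false);
        kdb_trap_printk--;

        return 0;
}
--- cut here ---

The locked wrapper preserves the existing behaviour for every normal
caller, while kdb (which knows the other CPUs are parked) takes the
_nolock path.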