Message-ID: <YMnenOBTUclLld9i@alley>
Date: Wed, 16 Jun 2021 13:21:00 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org,
Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Stephen Boyd <swboyd@...omium.org>,
Alexander Potapenko <glider@...gle.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH next v3 1/2] dump_stack: move cpu lock to printk.c
On Wed 2021-06-16 09:35:35, John Ogness wrote:
> On 2021-06-16, Sergey Senozhatsky <senozhatsky@...omium.org> wrote:
> It isn't about limiting. It is about tracking. The current dump_stack()
> handles it correctly because the tracking is done in the stack frame of
> the caller (in @was_locked of dump_stack_lvl()). My previous versions
> also handled it correctly by using the same technique.
>
> With this version of the series I moved the tracking into a global
> variable @printk_cpulock_nested, which is fine, except that a boolean
> cannot track more than one level of nesting. That means that
> __printk_cpu_unlock() would release cpu lock ownership too soon.
>
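Just to spell out why the boolean breaks, here is a hypothetical
user-space model of the current logic (cpu_lock()/cpu_unlock() are
made-up names, not code from the series):

        #include <stdbool.h>
        #include <stdio.h>

        static int owner = -1;          /* models printk_cpulock_owner */
        static bool nested;             /* models printk_cpulock_nested */

        static void cpu_lock(int cpu)
        {
                if (owner == -1)
                        owner = cpu;    /* take ownership */
                else if (owner == cpu)
                        nested = true;  /* 2nd, 3rd, ... level all look the same */
        }

        static void cpu_unlock(void)
        {
                if (nested) {
                        nested = false;
                        return;
                }
                owner = -1;             /* release ownership */
        }

        int main(void)
        {
                cpu_lock(0);    /* outer lock takes ownership */
                cpu_lock(0);    /* 1st nested lock sets the flag */
                cpu_lock(0);    /* 2nd nested lock: information is lost */
                cpu_unlock();   /* clears the flag */
                cpu_unlock();   /* releases ownership while one level is still active */
                printf("owner = %d\n", owner);  /* prints -1 */
                cpu_unlock();   /* the outermost unlock now unlocks nothing */
                return 0;
        }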
> Doing this correctly is a simple change:
>
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index e67dc510fa1b..5376216e4f3d 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -3535,7 +3535,7 @@ EXPORT_SYMBOL_GPL(kmsg_dump_rewind);
>
> #ifdef CONFIG_SMP
> static atomic_t printk_cpulock_owner = ATOMIC_INIT(-1);
> -static bool printk_cpulock_nested;
> +static atomic_t printk_cpulock_nested = ATOMIC_INIT(0);
>
> /**
> * __printk_wait_on_cpu_lock() - Busy wait until the printk cpu-reentrant
> @@ -3596,7 +3598,7 @@ int __printk_cpu_trylock(void)
>
> } else if (old == cpu) {
> /* This CPU is already the owner. */
> - printk_cpulock_nested = true;
> + atomic_inc(&printk_cpulock_nested);
> return 1;
> }
>
> @@ -3613,8 +3615,8 @@ EXPORT_SYMBOL(__printk_cpu_trylock);
> */
> void __printk_cpu_unlock(void)
> {
> - if (printk_cpulock_nested) {
> - printk_cpulock_nested = false;
> + if (atomic_read(&printk_cpulock_nested)) {
> + atomic_dec(&printk_cpulock_nested);
I have been thinking about handling printk_cpulock_nested with only
one atomic operation in the unlock path. Something like:
        if (atomic_dec_return(&printk_cpulock_level) == 0)
                atomic_set_release(&printk_cpulock_owner, -1);
It would require always incrementing the counter in the lock path, e.g.
        old = atomic_cmpxchg(&printk_cpulock_owner, -1, cpu);
        if (old == -1 || old == cpu) {
                atomic_inc(&printk_cpulock_level);
                return 1;
        }
But I am not sure that it is really better. Feel free to keep your
variant. A fuller sketch of this approach follows below the quoted
hunk.
> return;
> }
>
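For completeness, the whole variant would look something like this.
It is only a sketch against the quoted printk.c context, untested,
with the counter renamed to printk_cpulock_level and with everything
else that the real __printk_cpu_trylock() does around the cmpxchg
left out:

        static atomic_t printk_cpulock_owner = ATOMIC_INIT(-1);
        static atomic_t printk_cpulock_level = ATOMIC_INIT(0);

        int __printk_cpu_trylock(void)
        {
                int cpu, old;

                cpu = smp_processor_id();

                old = atomic_cmpxchg(&printk_cpulock_owner, -1, cpu);
                if (old == -1 || old == cpu) {
                        /*
                         * Count every lock, including the first one, so
                         * that unlock needs only one atomic operation.
                         */
                        atomic_inc(&printk_cpulock_level);
                        return 1;
                }

                return 0;
        }

        void __printk_cpu_unlock(void)
        {
                /* Drop the ownership only when leaving the outermost lock. */
                if (atomic_dec_return(&printk_cpulock_level) == 0)
                        atomic_set_release(&printk_cpulock_owner, -1);
        }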
> > Shall this be a separate patch?
>
> I would prefer a v4 because I also noticed that this patch accidentally
> implements atomic_set_release() instead of moving over the atomic_set()
> from dump_stack(). That also needs to be corrected, otherwise the next
> patch in the series makes no sense.
Yes, this needs to get fixed as well.
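To make sure that we understand it the same way, here is how I read
the intended unlock path for the 1st patch (a sketch only, using the
nested counter discussed above):

        void __printk_cpu_unlock(void)
        {
                if (atomic_read(&printk_cpulock_nested)) {
                        atomic_dec(&printk_cpulock_nested);
                        return;
                }

                /*
                 * The 1st patch should keep the plain atomic_set()
                 * moved over from dump_stack(). Only the 2nd patch
                 * should turn it into atomic_set_release() so that
                 * the stores done under the lock are ordered before
                 * the ownership is released.
                 */
                atomic_set(&printk_cpulock_owner, -1);
        }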
Otherwise, the patch looks good to me. I haven't found any other
problems, except for the two already mentioned (count nested levels,
introduce atomic_set_release() in 2nd patch).
Best Regards,
Petr