Message-ID: <6aef1204-4564-bc86-b6cf-3e5360a5b5f1@linaro.org>
Date: Mon, 7 Nov 2016 10:07:36 +0000
From: Daniel Thompson <daniel.thompson@...aro.org>
To: Petr Mladek <pmladek@...e.com>,
Jason Wessel <jason.wessel@...driver.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] kdb: Properly synchronize vkdb_printf() calls with
other CPUs
On 21/10/16 13:50, Petr Mladek wrote:
> kdb_printf_lock does not prevent other CPUs from entering the critical
> section because it is ignored when KDB_STATE_PRINTF_LOCK is set.
>
> The problematic situation might look like:
>
> ...
>
> Signed-off-by: Petr Mladek <pmladek@...e.com>
> ---
> kernel/debug/kdb/kdb_io.c | 36 +++++++++++++++++-------------------
> kernel/debug/kdb/kdb_private.h | 1 -
> 2 files changed, 17 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
> index fc1ef736253c..227b59ec7dbe 100644
> --- a/kernel/debug/kdb/kdb_io.c
> +++ b/kernel/debug/kdb/kdb_io.c
> @@ -555,16 +555,16 @@ int vkdb_printf(enum kdb_msgsrc src, const char *fmt, va_list ap)
> int colcount;
> int logging, saved_loglevel = 0;
> int saved_trap_printk;
> - int got_printf_lock = 0;
> int retlen = 0;
> int fnd, len;
> + int this_cpu, old_cpu;
> + static int kdb_printf_cpu = -1;
> char *cp, *cp2, *cphold = NULL, replaced_byte = ' ';
> char *moreprompt = "more> ";
> struct console *c = console_drivers;
> - static DEFINE_SPINLOCK(kdb_printf_lock);
> unsigned long uninitialized_var(flags);
>
> - preempt_disable();
> + local_irq_save(flags);
> saved_trap_printk = kdb_trap_printk;
> kdb_trap_printk = 0;
>
> @@ -572,13 +572,14 @@ int vkdb_printf(enum kdb_msgsrc src, const char *fmt, va_list ap)
> * But if any cpu goes recursive in kdb, just print the output,
> * even if it is interleaved with any other text.
> */
> - if (!KDB_STATE(PRINTF_LOCK)) {
> - KDB_STATE_SET(PRINTF_LOCK);
> - spin_lock_irqsave(&kdb_printf_lock, flags);
> - got_printf_lock = 1;
> - atomic_inc(&kdb_event);
> - } else {
> - __acquire(kdb_printf_lock);
> + this_cpu = smp_processor_id();
> + atomic_inc(&kdb_event);
When reviewing this I noticed that kdb_event is handled differently on
recursive entry after this patch, so I wanted to figure out whether this
alternative handling of kdb_event is safe.
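To spell out the difference for a recursive call (this is just a
condensed view of the hunk above, nothing new):

	/* Before: only the entry that takes the lock bumps kdb_event. */
	if (!KDB_STATE(PRINTF_LOCK)) {
		KDB_STATE_SET(PRINTF_LOCK);
		spin_lock_irqsave(&kdb_printf_lock, flags);
		got_printf_lock = 1;
		atomic_inc(&kdb_event);		/* skipped on recursion */
	} else {
		__acquire(kdb_printf_lock);	/* recursive: no increment */
	}

	/* After: every entry, recursive or not, bumps kdb_event. */
	this_cpu = smp_processor_id();
	atomic_inc(&kdb_event);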
In the end I concluded it is safe but that's mostly because the *only*
thing we ever seem to do with kdb_event is increment and decrement it.
So perhaps adding another patch at the front of this series to nuke this
variable would be worthwhile (it would also make this patch easier to
read).
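For reference, I assume the part of the hunk snipped above takes
ownership of kdb_printf_cpu with something like a cmpxchg() loop (this
is my reconstruction, not quoted from the patch, so take the exact form
with a pinch of salt):

	for (;;) {
		old_cpu = cmpxchg(&kdb_printf_cpu, -1, this_cpu);
		if (old_cpu == -1 || old_cpu == this_cpu)
			break;	/* free, or we already own it (recursion) */

		cpu_relax();
	}

with the owning CPU presumably writing -1 back when it is done. If so,
nuking kdb_event would leave that as the only serialization here.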
However, it's your choice and, either way:
Reviewed-by: Daniel Thompson <daniel.thompson@...aro.org>
Daniel.