Message-ID: <20200212141832.lqmzzdi77hb6yrhu@pathway.suse.cz>
Date: Wed, 12 Feb 2020 15:18:32 +0100
From: Petr Mladek <pmladek@...e.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitry Monakhov <dmtrmonakhov@...dex-team.ru>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Subject: Re: [PATCH] kernel/watchdog: flush all printk nmi buffers when
hardlockup detected

On Wed 2020-02-12 10:15:51, Sergey Senozhatsky wrote:
> On (20/02/10 12:48), Konstantin Khlebnikov wrote:
> >
> > In NMI context printk() can save messages into per-cpu buffers and
> > schedule a flush via irq_work once IRQs are unblocked. This means the
> > message about a hardlockup appears in the kernel log only when/if the
> > lockup is gone.
> >
> > A comment in irq_work_queue_on() states that remote IPIs aren't
> > NMI-safe, thus printk() cannot schedule the flush work on another CPU.
> >
> > This patch adds a simple atomic counter of detected hardlockups and
> > flushes all per-cpu printk buffers in the softlockup watchdog context
> > on any other CPU when it sees this counter change.
>
> Petr, could you remind me, why do we do PRINTK_NMI_DIRECT_CONTEXT_MASK
> only from ftrace?

There was a possible deadlock when printing backtraces from all CPUs.
The CPUs were serialized via a lock in nmi_cpu_backtrace(). One of
them might have been interrupted under logbuf_lock.

The direct mode was needed for ftrace because the ftrace dump printed
too many messages for the per-CPU buffers. And it was safe because the
ftrace log was read from a single CPU without taking any lock.
Best Regards,
Petr