Message-ID: <20190906142235.GY2386@hirez.programming.kicks-ass.net>
Date: Fri, 6 Sep 2019 16:22:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Petr Mladek <pmladek@...e.com>
Cc: Andrea Parri <andrea.parri@...rulasolutions.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Brendan Higgins <brendanhiggins@...gle.com>,
John Ogness <john.ogness@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v4 0/9] printk: new ringbuffer implementation
On Fri, Sep 06, 2019 at 04:01:26PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 06, 2019 at 02:42:11PM +0200, Petr Mladek wrote:
> > 7. People would complain if continuation lines become less
> > reliable. It might be most visible when mixing backtraces
> > from all CPUs; simple sorting by prefix will not make
> > them readable. The historic way was to synchronize CPUs
> > with a spin lock, but then the cpu_lock() could cause a
> > deadlock.
>
> Why? I'm running with that thing on, and I've never seen a deadlock
> because of it. In fact, I've gotten output that is plain impossible with
> the current junk.
>
> The cpu-lock is inside the all-backtrace spinlock, not outside. And as I
> said yesterday, only the lockless console has any wait-loops while
> holding the cpu-lock. It _will_ make progress.
Oooh, I think I see. So one solution would be to pass the NMI along
chain-like: send it to a single CPU at a time and, when that one has
finished, send it to the next.
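
Roughly like this (just a sketch to illustrate the idea; the helper
send_single_cpu_backtrace_nmi() and the done-mask are made-up names,
not anything in this series):

	static cpumask_t backtrace_done_mask;

	static void trigger_backtrace_chain(void)
	{
		int cpu;

		cpumask_clear(&backtrace_done_mask);

		for_each_online_cpu(cpu) {
			/* Poke exactly one CPU to dump its stack... */
			send_single_cpu_backtrace_nmi(cpu);

			/*
			 * ...and wait until its NMI handler flags
			 * completion before poking the next one, so
			 * the backtraces never interleave.
			 */
			while (!cpumask_test_cpu(cpu, &backtrace_done_mask))
				cpu_relax();
		}
	}

Each target's NMI handler would print its own backtrace and then set
its bit in backtrace_done_mask before returning.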