Message-ID: <20150611095705.GF9409@pathway.suse.cz>
Date: Thu, 11 Jun 2015 11:57:05 +0200
From: Petr Mladek <pmladek@...e.cz>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
jkosina@...e.cz, paulmck@...ux.vnet.ibm.com,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC][PATCH] printk: Fixup the nmi printk mess
On Wed 2015-06-10 17:29:17, Peter Zijlstra wrote:
> On Wed, Jun 10, 2015 at 04:31:55PM +0200, Petr Mladek wrote:
> > If another NMI comes at this point, it will start filling the buffer
> > from the beginning. If it is fast enough, it might override the text
> > that we print above.
>
> How so? If the cmpxchg succeeded and len == 0, we flushed everything and
> are done with it, if another NMI comes in and 'overwrites' it, that's
> fine, right?
Shame on me. I somehow thought it was xchg() and not cmpxchg().
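
To make sure I finally read it right, here is a rough sketch of the
flush side as I understand it now. The struct layout and the helper
name are only my own illustration, not the patch itself; the
identifiers follow the code quoted below:

#include <linux/seq_buf.h>
#include <linux/irq_work.h>
#include <linux/printk.h>

/* Assumed layout, matching the identifiers quoted below. */
struct nmi_seq_buf {
	unsigned char	buffer[4096];
	struct seq_buf	seq;
	struct irq_work	work;
};

static void nmi_buf_flush(struct nmi_seq_buf *s)
{
	unsigned int i = 0, len;

more:
	len = seq_buf_used(&s->seq);
	if (!len || i >= len)
		return;

	/* Print only what was appended since the previous pass. */
	printk("%.*s", (int)(len - i), s->buffer + i);
	i = len;

	/*
	 * Reset the length only if no NMI appended more text in the
	 * meantime; if the cmpxchg() fails, go around and flush the
	 * rest.  Once it succeeds, everything up to 'len' has been
	 * printed, so an NMI that starts writing from offset 0 again
	 * does not overwrite anything we still need.
	 */
	if (cmpxchg(&s->seq.len, len, 0) != len)
		goto more;
}
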
> > > +static int vprintk_nmi(const char *fmt, va_list args)
> > > +{
> > > + struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
> > > + unsigned int len = seq_buf_used(&s->seq);
> > > +
> > > + irq_work_queue(&s->work);
> > > + seq_buf_vprintf(&s->seq, fmt, args);
>
> No, everything is strictly per cpu.
I do not know why, but I expected that the irq_work could get
processed on any CPU. You are right, it is processed on the same one.
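
For the archives, this is how I picture the per-CPU wiring now:
irq_work_queue() raises the work on the local CPU only (the cross-CPU
variant would be irq_work_queue_on()), so the callback always runs on
the CPU whose buffer was filled. A rough sketch, continuing the one
above and again using my own helper names:

static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);

/* Runs in IRQ context on the same CPU that queued the work from NMI. */
static void nmi_buf_irq_work(struct irq_work *work)
{
	struct nmi_seq_buf *s = container_of(work, struct nmi_seq_buf, work);

	nmi_buf_flush(s);
}

static void nmi_buf_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct nmi_seq_buf *s = &per_cpu(nmi_print_seq, cpu);

		init_irq_work(&s->work, nmi_buf_irq_work);
		seq_buf_init(&s->seq, s->buffer, sizeof(s->buffer));
	}
}
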
The same applies to the other mail. Sigh, I was too fast yesterday.
I am sorry for the noise.
Best Regards,
Petr