Message-ID: <20170403124342.GE3452@pathway.suse.cz>
Date: Mon, 3 Apr 2017 14:43:42 +0200
From: Petr Mladek <pmladek@...e.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Steven Rostedt <rostedt@...dmis.org>, Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Rafael J . Wysocki" <rjw@...ysocki.net>,
Eric Biederman <ebiederm@...ssion.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jslaby@...e.com>, Pavel Machek <pavel@....cz>,
Len Brown <len.brown@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCHv2 1/8] printk: move printk_pending out of per-cpu

On Mon 2017-04-03 20:23:01, Sergey Senozhatsky wrote:
> On (03/31/17 15:33), Peter Zijlstra wrote:
> > On Fri, Mar 31, 2017 at 03:09:50PM +0200, Petr Mladek wrote:
> > > On Wed 2017-03-29 18:25:04, Sergey Senozhatsky wrote:
> >
> > > > if (waitqueue_active(&log_wait)) {
> > > > - this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
> > > > + set_bit(PRINTK_PENDING_WAKEUP, &printk_pending);
> > >
> > > We should add here a write barrier:
> > >
> > > /*
> > > * irq_work_queue() uses cmpxchg() and implies the memory
> > > * barrier only when the work is queued. An explicit barrier
> > > * is needed here to make sure that wake_up_klogd_work_func()
> > > * sees printk_pending set even when the work was already queued
> > > * because of another pending event.
> > > */
> > > smp_wmb();
> > >
> > > > irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
> > > > }
> > > > preempt_enable();
> >
> > smp_mb__after_atomic() is probably better, because if you're not
> > ordering with the cmpxchg, you're ordering against a load done by
> > cmpxchg to see it doesn't need to do anything.
>
> Petr and Peter, thanks for the review.
>
> can you educate me, what exactly is broken there?

Good point!
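
If I understand Peter correctly, the hunk would then look roughly like
this (the barrier placement and the comment wording are only my sketch
of the idea, not the actual patch):

	void wake_up_klogd(void)
	{
		preempt_disable();
		if (waitqueue_active(&log_wait)) {
			set_bit(PRINTK_PENDING_WAKEUP, &printk_pending);
			/*
			 * irq_work_queue() implies a full barrier only
			 * when it claims the work via cmpxchg().  If the
			 * work is already claimed, this barrier makes
			 * sure that the callback still sees the new bit
			 * when it re-reads printk_pending.
			 */
			smp_mb__after_atomic();
			irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
		}
		preempt_enable();
	}
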
> when called from console_unlock(), we have something as follows
>
> console_unlock()
> {
> for (;;) {
> spin_lock_irqsave();
> ...
> spin_unlock_irqrestore();
> ...
> }
>
> spin_unlock_irqrestore();
>
> <<IRQs enabled>>
>
> if (wake_klogd)
> wake_up_klogd()
> {
> set_bit(PRINTK_PENDING_WAKEUP, &printk_pending);
> irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
> }
> }
>
>
> we queue a per-CPU irq_work.

Ah, I forgot that the irq_work is still per-CPU. In that case, everything
seems to be safe even without the barrier. The important thing is that
an irq_work that will see and handle the bit is always queued on this
CPU. I believe that the barrier would be needed only if the irq_work
were global.
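
Just to make the reasoning explicit (only a sketch of the two cases,
not the real code flow):

	/*
	 * wake_up_klogd() on CPU0, with a per-CPU irq_work:
	 *
	 *   set_bit(PRINTK_PENDING_WAKEUP, &printk_pending);
	 *   irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
	 *
	 * Case 1: the work was not queued yet.  irq_work_queue()
	 *         claims it with cmpxchg(), which implies a full
	 *         barrier, so the callback sees the bit.
	 *
	 * Case 2: the work was already queued on CPU0.  The callback
	 *         will run later on the same CPU, after the set_bit()
	 *         in program order, so it sees the bit as well.
	 */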

I am sorry for the noise.

Best Regards,
Petr