Message-ID: <20180109170847.28b41eec@vmware.local.home>
Date: Tue, 9 Jan 2018 17:08:47 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Tejun Heo <tj@...nel.org>
Cc: Petr Mladek <pmladek@...e.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Rafael Wysocki <rjw@...ysocki.net>,
Pavel Machek <pavel@....cz>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [RFC][PATCHv6 00/12] printk: introduce printing kernel thread
On Tue, 9 Jan 2018 12:06:20 -0800
Tejun Heo <tj@...nel.org> wrote:
> What's happening is that the OOM killer is trapped flushing printk,
> failing to clear the memory condition, and that leads irq / softirq
> contexts to produce messages faster than they can be flushed. I don't
> see how we'd be able to clear the condition without introducing an
> independent context to flush the ring buffer.
>
> Again, this is an actual problem that we've been seeing fairly
> regularly in production machines.

But your test case is pinned to a single CPU. You have a work queue
that does a printk and triggers a timer interrupt to go off on that
same CPU. Then the timer interrupt does 10,000 printks, over and over
on the same CPU. Of course that will be an issue, and it is NOT similar
to the scenario that you listed above.
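
Roughly this kind of thing (untested sketch, NOT your actual test
module; the module name, CPU number, burst size and timer period are
all made up here):

/*
 * Sketch of a single-CPU printk flood: a work item pinned to one CPU
 * arms a periodic timer, and the timer irq on that same CPU dumps a
 * burst of printks every time it fires.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/printk.h>

#define FLOOD_CPU	0
#define BURST		10000

static struct hrtimer flood_timer;

static enum hrtimer_restart flood_timer_fn(struct hrtimer *t)
{
	int i;

	/* Hard irq context on FLOOD_CPU floods the ring buffer. */
	for (i = 0; i < BURST; i++)
		printk(KERN_INFO "flood %d\n", i);

	hrtimer_forward_now(t, ms_to_ktime(10));
	return HRTIMER_RESTART;
}

static void flood_work_fn(struct work_struct *work)
{
	printk(KERN_INFO "starting printk flood on CPU %d\n", FLOOD_CPU);

	/* Arm the periodic timer from FLOOD_CPU so it fires there too. */
	hrtimer_start(&flood_timer, ms_to_ktime(10), HRTIMER_MODE_REL);
}

static DECLARE_WORK(flood_work, flood_work_fn);

static int __init flood_init(void)
{
	hrtimer_init(&flood_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	flood_timer.function = flood_timer_fn;

	/* Pin the work item to one CPU; everything stays on that CPU. */
	schedule_work_on(FLOOD_CPU, &flood_work);
	return 0;
}

static void __exit flood_exit(void)
{
	hrtimer_cancel(&flood_timer);
	cancel_work_sync(&flood_work);
}

module_init(flood_init);
module_exit(flood_exit);
MODULE_LICENSE("GPL");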

The scenario you listed would affect multiple CPUs, and multiple CPUs
would be flooding printk. In that case my patch WILL help. Because with
the current method, the first CPU to do the printk will get stuck doing
the printk for ALL OTHER CPUs. With my patch, the printk load will
migrate around and there will not be a single CPU that is stuck.
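
To make the hand-off idea concrete, here is a toy userspace simulation
(NOT the actual patch, which does the hand-off inside printk itself;
threads stand in for CPUs and every name below is made up):

/*
 * Toy hand-off simulation: whichever thread owns the "console" flushes
 * records until the buffer is empty or another thread shows up wanting
 * to print, at which point the owner hands the console over and returns.
 * Build with: cc -pthread handoff.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NCPUS	4
#define MSGS	5

static atomic_int pending;		/* records not yet flushed       */
static atomic_int owner  = -1;		/* "CPU" doing the flushing      */
static atomic_int waiter = -1;		/* "CPU" spinning for a hand-off */

static void flush_one_record(int cpu)
{
	fprintf(stderr, "cpu%d flushed a record\n", cpu);
	usleep(1000);			/* pretend the console is slow */
}

static void do_printk(int cpu)
{
	int none = -1;
	int w;

	atomic_fetch_add(&pending, 1);

	/* Try to become the console owner. */
	if (!atomic_compare_exchange_strong(&owner, &none, cpu)) {
		/*
		 * Someone else is flushing.  Register as the (single)
		 * waiter and spin until the owner hands the console to
		 * us or drops it.  If the waiter slot is taken, just
		 * leave our record in the buffer and return.
		 */
		none = -1;
		if (!atomic_compare_exchange_strong(&waiter, &none, cpu))
			return;
		for (;;) {
			if (atomic_load(&owner) == cpu)
				break;		/* handed over to us */
			none = -1;
			if (atomic_compare_exchange_weak(&owner, &none, cpu))
				break;		/* old owner dropped it */
		}
		atomic_store(&waiter, -1);
	}

	/* Flush until the buffer is empty or someone else is waiting. */
	while (atomic_load(&pending) > 0) {
		atomic_fetch_sub(&pending, 1);
		flush_one_record(cpu);

		w = atomic_load(&waiter);
		if (w >= 0) {
			atomic_store(&owner, w);	/* hand off */
			return;
		}
	}
	atomic_store(&owner, -1);
}

static void *cpu_thread(void *arg)
{
	int cpu = (int)(long)arg;
	int i;

	for (i = 0; i < MSGS; i++)
		do_printk(cpu);
	return NULL;
}

int main(void)
{
	pthread_t t[NCPUS];
	long i;

	for (i = 0; i < NCPUS; i++)
		pthread_create(&t[i], NULL, cpu_thread, (void *)i);
	for (i = 0; i < NCPUS; i++)
		pthread_join(t[i], NULL);
	return 0;
}

The point is only that whoever is busy flushing passes the job to the
next CPU that comes in with a new message, so the cost moves around
instead of staying pinned to the first caller.
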
So no, your test is not realistic.
-- Steve