Message-ID: <20140625122149.GL8769@pathway.suse.cz>
Date: Wed, 25 Jun 2014 14:21:49 +0200
From: Petr Mládek <pmladek@...e.cz>
To: Konstantin Khlebnikov <koct9i@...il.com>
Cc: Jiri Kosina <jkosina@...e.cz>,
Steven Rostedt <rostedt@...dmis.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.cz>, Jan Kara <jack@...e.cz>,
Frederic Weisbecker <fweisbec@...il.com>,
Dave Anderson <anderson@...hat.com>
Subject: Re: [RFC][PATCH 0/3] x86/nmi: Print all cpu stacks from NMI safely

On Tue 2014-06-24 17:32:15, Konstantin Khlebnikov wrote:
>
> Instead of per-cpu buffers printk might use part of existing ring
> buffer -- initiator cpu allocates space for
> target cpu and flushes it into common stream after it finish printing.
> Probably this kind of transactional model might be used on single cpu
> for multi-line KERN_CONT.

I wanted to think more about this.

The clear advantage is that we would not need any extra buffers. It
might also be used instead of the cont buffer.
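
Just to make the idea more concrete, I imagine the interface could
look roughly like this. It is only a sketch; none of these functions
exist and all the names are made up:

struct printk_reservation {
	char	*buf;	/* space carved out of the main ring buffer */
	size_t	size;	/* how many bytes were reserved */
	size_t	used;	/* how many bytes the target CPU has written */
};

/* initiator CPU: carve out @size bytes in the main ring buffer */
int printk_reserve(struct printk_reservation *res, size_t size);

/* target CPU, e.g. in NMI context: append text into the reservation */
int printk_reserved(struct printk_reservation *res, const char *fmt, ...);

/* initiator CPU: flush the written part into the common stream and
 * give up the unused part of the reservation */
void printk_commit(struct printk_reservation *res);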

One challenge is that we do not know in advance how much space is
needed. If we reserve too little, we might not be able to reserve
more later, for example when the main ring buffer is locked by some
blocked process. If we reserve too much, we lose information, because
old messages must be removed to make the space. Also, we could not
rotate the ring buffer over the reserved space, so we might miss the
last messages from the normal context when the system goes down.
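
Also, the initiator would have to guess the size up front, something
like this (again using the made-up interface from above; the
request_nmi_backtrace() helper and the size are invented just for
illustration):

/* pure guess: might be too small or needlessly big */
#define NMI_DUMP_SIZE_GUESS	(4 * 1024)

static void dump_stack_on_cpu(int cpu)
{
	struct printk_reservation res;

	if (printk_reserve(&res, NMI_DUMP_SIZE_GUESS)) {
		/* not enough free space; the backtrace is lost */
		return;
	}

	/*
	 * Ask the target CPU to fill the reservation from NMI context
	 * and wait until it is done (synchronization omitted here).
	 */
	request_nmi_backtrace(cpu, &res);

	/*
	 * The unused part of the reservation cannot bring back the old
	 * messages that were rotated out to make room for it.
	 */
	printk_commit(&res);
}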

Another question is whether it would simplify the printk() code. The
reservation logic would probably be more complicated than the current
logic around the cont buffer.

Best Regards,
Petr