Message-Id: <20171204134825.7822-13-sergey.senozhatsky@gmail.com>
Date: Mon, 4 Dec 2017 22:48:25 +0900
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To: Steven Rostedt <rostedt@...dmis.org>,
Petr Mladek <pmladek@...e.com>
Cc: Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Rafael Wysocki <rjw@...ysocki.net>,
Pavel Machek <pavel@....cz>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: [RFC][PATCHv6 12/12] printk: improve printk offloading mechanism

The existing offloading mechanism breaks out of the console_unlock()
loop without knowing whether offloading will ever succeed. This is
not a big problem for

	while (...)
		printk()

loops, because the control path will return to console_unlock()
anyway. This is not always true for the following case:
	CPU0				CPU1
	console_lock()
					printk()
					...
					printk()
	console_unlock()

Breaking out of console_unlock() in this case might leave pending
messages in the logbuf.

Steven Rostedt came up with the following printing hand-off scheme [1]:
: I added a "console_owner" which is set to a task that is actively
: writing to the consoles. It is *not* the same as the owner of the
: console_lock. It is only set when doing the calls to the console
: functions. It is protected by a console_owner_lock which is a raw spin
: lock.
:
: There is a console_waiter. This is set when there is an active console
: owner that is not current, and waiter is not set. This too is protected
: by console_owner_lock.
:
: In printk() when it tries to write to the consoles, we have:
:
: if (console_trylock())
: console_unlock();
:
: Now I added an else, which will check if there is an active owner, and
: no current waiter. If that is the case, then console_waiter is set, and
: the task goes into a spin until it is no longer set.
:
: When the active console owner finishes writing the current message to
: the consoles, it grabs the console_owner_lock and sees if there is a
: waiter, and clears console_owner.
:
: If there is a waiter, then it breaks out of the loop, clears the waiter
: flag (because that will release the waiter from its spin), and exits.
: Note, it does *not* release the console semaphore. Because it is a
: semaphore, there is no owner. Another task may release it. This means
: that the waiter is guaranteed to be the new console owner! Which it
: becomes.
:
: Then the waiter calls console_unlock() and continues to write to the
: consoles.
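
For illustration, here is a minimal user-space model of the quoted
scheme. A pthread mutex stands in for console_owner_lock, and the
thread setup, the message counter and the function names are made up
for the sketch; none of this is part of the patch itself:

/* gcc -O2 -pthread handoff_sketch.c -o handoff_sketch */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t owner_lock = PTHREAD_MUTEX_INITIALIZER;
static bool console_owner;   /* a task is actively printing        */
static bool console_waiter;  /* a task spins, waiting for hand off */
static int pending = 10;     /* messages left in the "logbuf"      */

static void print_messages(const char *who)
{
	while (pending > 0) {
		pthread_mutex_lock(&owner_lock);
		console_owner = true;
		pthread_mutex_unlock(&owner_lock);

		/* call_console_drivers() */
		printf("%s: message %d\n", who, pending--);

		pthread_mutex_lock(&owner_lock);
		console_owner = false;
		if (console_waiter) {
			/*
			 * Hand off: clearing the flag releases the waiter,
			 * which becomes the new printing task.
			 */
			console_waiter = false;
			pthread_mutex_unlock(&owner_lock);
			return;
		}
		pthread_mutex_unlock(&owner_lock);
	}
}

static void *second_printer(void *arg)
{
	bool spin = false;

	(void)arg;
	/*
	 * The "else" branch of console_trylock(): become the waiter
	 * only if there is an active owner and no waiter yet.
	 */
	pthread_mutex_lock(&owner_lock);
	if (console_owner && !console_waiter) {
		console_waiter = true;
		spin = true;
	}
	pthread_mutex_unlock(&owner_lock);

	if (spin) {
		bool waiting = true;

		while (waiting) {	/* cpu_relax() in the kernel */
			pthread_mutex_lock(&owner_lock);
			waiting = console_waiter;
			pthread_mutex_unlock(&owner_lock);
		}
		/* We are the new printing task now */
		print_messages("waiter");
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, second_printer, NULL);
	print_messages("owner");
	pthread_join(&t, NULL);
	return 0;
}

Depending on scheduling, the second thread may find no active owner
and do nothing; but whenever it catches the owner mid-print, the
owner stops after the current message and the waiter drains the rest,
without any semaphore changing hands - which is the property the
scheme relies on.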

This patch is based on Steven's idea. The key difference is that we
hand off printing to printk_kthread, and we do the hand off using the
existing printk offloading logic. So we have only one task doing the
print out in console_unlock() and only one task we can hand off
printing to - printk_kthread. Since console_owner is set only during
calls to console drivers, printk_kthread can wait for the hand off
via one of two paths:
a) a fast path - console_owner is set and printk_kthread busy-loops,
   waiting for the printing CPU to hand off (the
   SPINNING_WAITER_HANDOFF case)

b) a slow path - console_owner is not set, so printk_kthread cannot
   spin on console_owner (the SLEEPING_WAITER_HANDOFF case). Instead
   it must sleep on the console_sem semaphore, and the hand-off
   scheme in this case involves up().

The printing CPU then detects, from the console_unlock() loop, that
printk_kthread is either in the SPINNING_WAITER_HANDOFF state, in
which case we need to hand off console semaphore ownership, or in
the SLEEPING_WAITER_HANDOFF state, in which case we need to up() the
console semaphore and wake it up.
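
Condensed from the printk_kthread_func() hunk below (the lockdep
annotations, the IRQ handling and the `spin' bookkeeping are
trimmed), the two waiting modes boil down to:

raw_spin_lock(&console_owner_lock);
if (active_console_owner) {
	/* a) fast path: the printing CPU is inside a console driver
	 *    call; spin until it hands over console_sem ownership. */
	WRITE_ONCE(console_handoff_waiter, SPINNING_WAITER_HANDOFF);
} else {
	/* b) slow path: nothing to spin on; sleep in console_lock()
	 *    and wait for the printing CPU to up() console_sem. */
	WRITE_ONCE(console_handoff_waiter, SLEEPING_WAITER_HANDOFF);
}
raw_spin_unlock(&console_owner_lock);

Either way, console_handoff_waiter tells the printing CPU which of
the two hand-off paths to take from its console_unlock() loop.
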
[1] lkml.kernel.org/r/20171108102723.602216b1@...dalf.local.home
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
---
kernel/printk/printk.c | 180 ++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 169 insertions(+), 11 deletions(-)
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 2a1ec075cc13..2395f18fec53 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -411,6 +411,13 @@ DEFINE_RAW_SPINLOCK(logbuf_lock);
static unsigned long printk_pending;
+enum console_handoff {
+ DONT_HANDOFF = 0,
+ SPINNING_WAITER_HANDOFF = (1 << 0),
+ SLEEPING_WAITER_HANDOFF = (1 << 1),
+ PRINTK_KTHREAD_HANDOFF = (1 << 2),
+};
+
#ifdef CONFIG_PRINTK
DECLARE_WAIT_QUEUE_HEAD(log_wait);
/* the next printk record to read by syslog(READ) or /proc/kmsg */
@@ -467,6 +474,16 @@ module_param_named(offloading_enabled, offloading_enabled, bool, 0644);
MODULE_PARM_DESC(offloading_enabled,
"enable/disable print out offloading to printk kthread");
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map console_owner_dep_map = {
+ .name = "console_owner"
+};
+#endif
+
+static DEFINE_RAW_SPINLOCK(console_owner_lock);
+static bool active_console_owner;
+static unsigned long console_handoff_waiter;
+
static inline bool printk_offloading_enabled(void)
{
if (system_state != SYSTEM_RUNNING || oops_in_progress)
@@ -573,6 +590,43 @@ void touch_printk_offloading_watchdog_on_cpu(unsigned int cpu)
per_cpu(printing_elapsed, cpu) = 0;
}
+static void spinning_waiter_handoff_enable(void)
+{
+ raw_spin_lock(&console_owner_lock);
+ active_console_owner = true;
+ raw_spin_unlock(&console_owner_lock);
+ /* The waiter may spin on us after setting console_owner */
+ spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
+}
+
+static unsigned long spinning_waiter_handoff_disable(void)
+{
+ unsigned long waiter;
+
+ raw_spin_lock(&console_owner_lock);
+ active_console_owner = false;
+ waiter = READ_ONCE(console_handoff_waiter);
+ raw_spin_unlock(&console_owner_lock);
+
+ if (waiter != SPINNING_WAITER_HANDOFF) {
+ /* There was no waiter, and nothing will spin on us here */
+ spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+ }
+ return waiter;
+}
+
+static void console_handoff_printing(void)
+{
+ WRITE_ONCE(console_handoff_waiter, DONT_HANDOFF);
+ /* The waiter is now free to continue */
+ spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+ /*
+ * Hand off console_lock to waiter. The waiter will perform
+ * the up(). After this, the waiter is the console_lock owner.
+ */
+ mutex_release(&console_lock_dep_map, 1, _THIS_IP_);
+}
+
/*
* Under heavy printing load or with a slow serial console (or both)
* console_unlock() can stall CPUs, which can result in soft/hard-lockups,
@@ -582,24 +636,35 @@ void touch_printk_offloading_watchdog_on_cpu(unsigned int cpu)
* This function must be called from 'printk_safe' context under
* console_sem lock with preemption disabled.
*/
-static inline bool should_handoff_printing(u64 printing_start_ts)
+static inline enum console_handoff
+should_handoff_printing(u64 printing_start_ts)
{
static struct task_struct *printing_task;
u64 now = local_clock();
bool emergency = !printk_offloading_enabled();
+ unsigned long waiter = spinning_waiter_handoff_disable();
/* We are in emergency mode, disable printk_kthread */
if (printk_kthread_should_stop(emergency))
- return true;
+ return PRINTK_KTHREAD_HANDOFF;
+
+ /*
+ * There is a printk_kthread waiting for us to release the
+ * console_sem, either in SPINNING_WAITER_HANDOFF or in
+ * SLEEPING_WAITER_HANDOFF mode. console_unlock() will take
+ * care of it.
+ */
+ if (waiter)
+ return waiter;
if (emergency)
- return false;
+ return DONT_HANDOFF;
/* A new task - reset the counters. */
if (printing_task != current) {
touch_printk_offloading_watchdog();
printing_task = current;
- return false;
+ return DONT_HANDOFF;
}
if (time_after_eq64(now, printing_start_ts)) {
@@ -612,7 +677,7 @@ static inline bool should_handoff_printing(u64 printing_start_ts)
/* Shrink down to seconds and check the offloading threshold */
if ((__this_cpu_read(printing_elapsed) >> 30LL) <
offloading_threshold())
- return false;
+ return DONT_HANDOFF;
if (current == printk_kthread) {
unsigned int cpu;
@@ -631,7 +696,7 @@ static inline bool should_handoff_printing(u64 printing_start_ts)
*/
for_each_possible_cpu(cpu)
touch_printk_offloading_watchdog_on_cpu(cpu);
- return true;
+ return PRINTK_KTHREAD_HANDOFF;
}
/*
@@ -651,7 +716,7 @@ static inline bool should_handoff_printing(u64 printing_start_ts)
set_cpus_allowed_ptr(printk_kthread, printk_offload_cpus);
wake_up_process(printk_kthread);
- return true;
+ return DONT_HANDOFF;
}
/* Return log buffer address */
@@ -2100,10 +2165,16 @@ EXPORT_SYMBOL_GPL(printk_emergency_begin_sync);
int printk_emergency_end_sync(void) { return 0; }
EXPORT_SYMBOL_GPL(printk_emergency_end_sync);
-static bool should_handoff_printing(u64 printing_start_ts) { return false; }
+static enum console_handoff should_handoff_printing(u64 printing_start_ts)
+{
+ return DONT_HANDOFF;
+}
+
static bool printk_offloading_enabled(void) { return false; }
void touch_printk_offloading_watchdog(void) {}
void touch_printk_offloading_watchdog_on_cpu(unsigned int cpu) {}
+static void spinning_waiter_handoff_enable(void) {}
+static void console_handoff_printing(void) {}
#endif /* CONFIG_PRINTK */
#ifdef CONFIG_EARLY_PRINTK
@@ -2385,7 +2456,7 @@ void console_unlock(void)
unsigned long flags;
bool wake_klogd = false;
bool do_cond_resched, retry = false;
- bool do_handoff = false;
+ enum console_handoff do_handoff = DONT_HANDOFF;
if (console_suspended) {
/*
@@ -2492,12 +2563,43 @@ void console_unlock(void)
console_seq++;
raw_spin_unlock(&logbuf_lock);
+ /*
+ * The matching spinning_waiter_handoff_disable() is called
+ * from should_handoff_printing(); see the comment below.
+ */
+ spinning_waiter_handoff_enable();
+
stop_critical_timings(); /* don't trace print latency */
call_console_drivers(ext_text, ext_len, text, len);
start_critical_timings();
/* Must be called under printk_safe */
do_handoff = should_handoff_printing(printing_start_ts);
+ /*
+ * We have two paths for hand off:
+ *
+ * 1) a fast one when we have a SPINNING_WAITER with local
+ * IRQs disabled on another CPU;
+ *
+ * 2) a slow one when we have a SLEEPING_WAITER on the
+ * console_sem.
+ *
+ * For the fast path we pass off the printing to the waiter.
+ * The waiter will continue printing on its CPU and up() the
+ * console_sem.
+ *
+ * For the slow path we need to go through the 'normal' return
+ * from console_unlock(), which involves up_console_sem().
+ *
+ * When all writing has finished, the last printer will wake
+ * up klogd.
+ */
+ if (do_handoff == SPINNING_WAITER_HANDOFF) {
+ console_handoff_printing();
+ printk_safe_exit_irqrestore(flags);
+ preempt_enable();
+ return;
+ }
printk_safe_exit_irqrestore(flags);
if (!do_handoff && do_cond_resched) {
@@ -2522,8 +2624,11 @@ void console_unlock(void)
* if there's something to flush. In case when trylock fails,
* there's a new owner and the console_unlock() from them will
* do the flush, no worries.
+ *
+ * Do not retry printing if we have a SLEEPING_WAITER; up() will
+ * wake it up.
*/
- if (!do_handoff) {
+ if (do_handoff == DONT_HANDOFF) {
raw_spin_lock(&logbuf_lock);
retry = console_seq != log_next_seq;
if (!retry)
@@ -2990,6 +3095,9 @@ static struct syscore_ops printk_syscore_ops = {
static int printk_kthread_func(void *data)
{
while (1) {
+ unsigned long flags;
+ bool spin = false;
+
set_current_state(TASK_INTERRUPTIBLE);
/*
* We must check `printk_emergency' as well, to let
@@ -3012,7 +3120,57 @@ static int printk_kthread_func(void *data)
if (kthread_should_park())
kthread_parkme();
- console_lock();
+ local_irq_save(flags);
+ raw_spin_lock(&console_owner_lock);
+ /*
+ * Printing CPU has requested printk_kthread offloading. There
+ * are two cases here:
+ * a) `active_console_owner' is set, so we can be a
+ * SPINNING_WAITER and busy-loop, waiting for the printing
+ * CPU to transfer console_sem ownership to us.
+ *
+ * b) otherwise, the printing task has IRQs enabled and it
+ * may be interrupted anytime, while still holding the
+ * console_sem. We must become a SLEEPING_WAITER and do
+ * console_lock(). The printing task will do up() as soon
+ * as possible.
+ */
+ if (active_console_owner) {
+ spin = true;
+ WRITE_ONCE(console_handoff_waiter,
+ SPINNING_WAITER_HANDOFF);
+ } else {
+ spin = false;
+ WRITE_ONCE(console_handoff_waiter,
+ SLEEPING_WAITER_HANDOFF);
+ }
+ raw_spin_unlock(&console_owner_lock);
+
+ if (spin) {
+ /* We spin waiting for the owner to release us */
+ spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
+ /* Owner will clear SPINNING_WAITER bit on hand off */
+ while (READ_ONCE(console_handoff_waiter) ==
+ SPINNING_WAITER_HANDOFF)
+ cpu_relax();
+
+ spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+ local_irq_restore(flags);
+ /*
+ * The owner passed the console lock to us.
+ * Since we did not spin on console lock, annotate
+ * this as a trylock. Otherwise lockdep will
+ * complain.
+ */
+ mutex_acquire(&console_lock_dep_map, 0, 1, _THIS_IP_);
+ } else {
+ local_irq_restore(flags);
+ console_lock();
+
+ /* We've been woken up by up() */
+ WRITE_ONCE(console_handoff_waiter, DONT_HANDOFF);
+ }
+
console_unlock();
/* We might have been blocked on console_sem */
--
2.15.1