Message-ID: <e763af8ab34d96fcd02e76ad7b41f6b95c78850a.1764272407.git.chris@chrisdown.name>
Date: Fri, 28 Nov 2025 03:43:32 +0800
From: Chris Down <chris@...isdown.name>
To: Petr Mladek <pmladek@...e.com>
Cc: linux-kernel@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
John Ogness <john.ogness@...utronix.de>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Tony Lindgren <tony.lindgren@...ux.intel.com>, kernel-team@...com
Subject: [PATCH v8 06/21] printk: nbcon: Synchronise console unregistration
against atomic flushers
The nbcon atomic flush path in __nbcon_atomic_flush_pending_con() calls
nbcon_emit_next_record(), which uses console_srcu_read_flags() to read
the console flags. console_srcu_read_flags() expects to be called under
console_srcu_read_lock(), but the atomic flush path does not hold this
lock.
While console_srcu_read_flags() works without the lock in practice, this
violates the SRCU contract: without holding the lock,
unregister_console() cannot properly synchronise against concurrent
atomic flushers, since synchronize_srcu() would not wait for them.
Wrap the atomic flush critical section with console_srcu_read_lock() and
console_srcu_read_unlock(). This ensures that:
1. unregister_console() can safely synchronise against atomic flushers
via synchronize_srcu() before proceeding with console teardown.
2. The SRCU protection guarantees that console state remains valid
(CON_SUSPENDED/CON_ENABLED) and that exit/cleanup routines will not
run while the atomic flusher is operating.
The locking is placed around the entire flush operation rather than just
the flags read, as future changes will add additional SRCU-protected
reads in this path.
Signed-off-by: Chris Down <chris@...isdown.name>
---
kernel/printk/nbcon.c | 30 +++++++++++++++++++++++++-----
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/kernel/printk/nbcon.c b/kernel/printk/nbcon.c
index eb4c8faa213d..493c9e8b2dd5 100644
--- a/kernel/printk/nbcon.c
+++ b/kernel/printk/nbcon.c
@@ -1498,15 +1498,29 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
 {
 	struct nbcon_write_context wctxt = { };
 	struct nbcon_context *ctxt = &ACCESS_PRIVATE(&wctxt, ctxt);
+	bool ctx_acquired = false;
 	int err = 0;
+	int cookie;
 
 	ctxt->console = con;
 	ctxt->spinwait_max_us = 2000;
 	ctxt->prio = nbcon_get_default_prio();
 	ctxt->allow_unsafe_takeover = allow_unsafe_takeover;
 
-	if (!nbcon_context_try_acquire(ctxt, false))
-		return -EPERM;
+	/*
+	 * Match the console_srcu_read_lock()/unlock expectation embedded in
+	 * console_srcu_read_flags(), which is called from nbcon_emit_next_record().
+	 * Without this, unregister_console() cannot synchronise against the
+	 * atomic flusher.
+	 */
+	cookie = console_srcu_read_lock();
+
+	if (!nbcon_context_try_acquire(ctxt, false)) {
+		err = -EPERM;
+		goto out_unlock;
+	}
+
+	ctx_acquired = true;
 
 	while (nbcon_seq_read(con) < stop_seq) {
 		/*
@@ -1514,8 +1528,11 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
 		 * handed over or taken over. In both cases the context is no
 		 * longer valid.
 		 */
-		if (!nbcon_emit_next_record(&wctxt, true))
-			return -EAGAIN;
+		if (!nbcon_emit_next_record(&wctxt, true)) {
+			err = -EAGAIN;
+			ctx_acquired = false;
+			goto out_unlock;
+		}
 
 		if (!ctxt->backlog) {
 			/* Are there reserved but not yet finalized records? */
@@ -1525,7 +1542,10 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
 		}
 	}
 
-	nbcon_context_release(ctxt);
+out_unlock:
+	if (ctx_acquired)
+		nbcon_context_release(ctxt);
+	console_srcu_read_unlock(cookie);
 
 	return err;
 }
--
2.51.2