Message-ID: <ZktmhL2hos8-IrNb@pathway.suse.cz>
Date: Mon, 20 May 2024 17:04:36 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH printk v5 19/30] printk: nbcon: Provide function to flush
using write_atomic()
On Thu 2024-05-02 23:44:28, John Ogness wrote:
> From: Thomas Gleixner <tglx@...utronix.de>
>
> Provide nbcon_atomic_flush_pending() to perform flushing of all
> registered nbcon consoles using their write_atomic() callback.
>
> Unlike console_flush_all(), nbcon_atomic_flush_pending() will
> only flush up through the newest record at the time of the
> call. This prevents a CPU from printing unbounded when other
> CPUs are adding records. If new records are added while
> flushing, it is expected that the dedicated printer threads
> will print those records. If the printer thread is not
> available (which is always the case at this point in the
> rework), nbcon_atomic_flush_pending() _will_ flush all records
> in the ringbuffer.
>
> Unlike console_flush_all(), nbcon_atomic_flush_pending() will
> fully flush one console before flushing the next. This helps to
> guarantee that a block of pending records (such as a stack
> trace in an emergency situation) can be printed atomically at
> once before releasing console ownership.
>
> nbcon_atomic_flush_pending() is safe in any context because it
> uses write_atomic() and acquires with unsafe_takeover disabled.
>
> Use it in console_flush_on_panic() before flushing legacy
> consoles. The legacy write() callbacks are not fully safe when
> oops_in_progress is set.
>
> Also use it in nbcon_driver_release() to flush records added
> while the driver had the console locked to perform non-printing
> operations.
>
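Just a note for anyone reading along: the console_flush_on_panic() hunk
is not quoted below, but the intended ordering conceptually boils down
to something like the following rough sketch (the real function also
deals with seq rewinding, and next_seq/handover are locals of the
legacy flush path there):

	void console_flush_on_panic(enum con_flush_mode mode)
	{
		...
		/*
		 * Flush nbcon consoles first via their write_atomic()
		 * callbacks. These stay safe even with oops_in_progress set.
		 */
		nbcon_atomic_flush_pending();

		/* Then run the legacy loop for the legacy consoles. */
		console_flush_all(false, &next_seq, &handover);
		...
	}
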
> --- a/kernel/printk/nbcon.c
> +++ b/kernel/printk/nbcon.c
> @@ -953,6 +952,148 @@ enum nbcon_prio nbcon_get_default_prio(void)
> return NBCON_PRIO_NORMAL;
> }
>
> +/*
> + * __nbcon_atomic_flush_pending_con - Flush specified nbcon console using its
> + * write_atomic() callback
> + * @con: The nbcon console to flush
> + * @stop_seq: Flush up until this record
> + *
> + * Return: 0 if @con was flushed up to @stop_seq. Otherwise, an error
> + * code on failure.
> + *
> + * Errors:
> + *
> + * -EPERM: Unable to acquire console ownership.
> + *
> + * -EAGAIN: Another context took over ownership while printing.
> + *
> + * -ENOENT: A record before @stop_seq is not available.
> + *
> + * If flushing up to @stop_seq was not successful, it only makes sense for the
> + * caller to try again when -EAGAIN was returned. When -EPERM is returned,
> + * this context is not allowed to acquire the console. When -ENOENT is
> + * returned, it cannot be expected that the unfinalized record will become
> + * available.
> + */
> +static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
> +{
> + struct nbcon_write_context wctxt = { };
> + struct nbcon_context *ctxt = &ACCESS_PRIVATE(&wctxt, ctxt);
> + int err = 0;
> +
> + ctxt->console = con;
> + ctxt->spinwait_max_us = 2000;
> + ctxt->prio = nbcon_get_default_prio();
> +
> + if (!nbcon_context_try_acquire(ctxt))
> + return -EPERM;
> +
> + while (nbcon_seq_read(con) < stop_seq) {
> + /*
> + * nbcon_emit_next_record() returns false when the console was
> + * handed over or taken over. In both cases the context is no
> + * longer valid.
> + */
> + if (!nbcon_emit_next_record(&wctxt))
> + return -EAGAIN;
> +
> + if (!ctxt->backlog) {
This would deserve a comment:

	/* Are there reserved but not-yet finalized records? */
> + if (nbcon_seq_read(con) < stop_seq)
> + err = -ENOENT;
> + break;
> + }
> + }
> +
> + nbcon_context_release(ctxt);
> + return err;
> +}
> +
> +/**
> + * nbcon_atomic_flush_pending_con - Flush specified nbcon console using its
> + * write_atomic() callback
> + * @con: The nbcon console to flush
> + * @stop_seq: Flush up until this record
> + *
> + * This will stop flushing before @stop_seq if another context has ownership.
> + * That context is then responsible for the flushing. Likewise, if new records
> + * are added while this context was flushing and there is no other context
> + * to handle the printing, this context must also flush those records.
> + */
> +static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
> +{
> + unsigned long flags;
> + int err;
> +
> +again:
> + /*
> + * Atomic flushing does not use console driver synchronization (i.e.
> + * it does not hold the port lock for uart consoles). Therefore IRQs
> + * must be disabled to avoid being interrupted and then calling into
> + * a driver that will deadlock trying to acquire console ownership.
> + */
> + local_irq_save(flags);
> +
> + err = __nbcon_atomic_flush_pending_con(con, stop_seq);
> +
> + local_irq_restore(flags);
> +
> + /*
> + * If flushing was successful but more records are available, this
> + * context must flush those remaining records because there is no
> + * other context that will do it.
> + */
> + if (!err && prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
> + stop_seq = prb_next_reserve_seq(prb);
> + goto again;
> + }
> +
> + /*
> + * If there was a new owner, that context is responsible for
> + * completing the flush.
> + */
This is a bit of an odd code layout. I wonder if it will get extended
in some future patchset, but...

Anyway, there are three possible errors, and the comment above only
talks about one of them. Let's go through them:

  -EPERM:  OK, we can't do much here.
  -EAGAIN: OK, the other context is responsible for the flush.
  -ENOENT: It took me quite some time to wrap my head around this one.
           IMHO, it makes sense after all, but it would deserve a
           comment.

What about reshuffling the code a bit?
<proposal>
	/*
	 * If there was a new owner (-EPERM, -EAGAIN), that context is
	 * responsible for completing the flush.
	 *
	 * Do not wait for not-yet finalized records (-ENOENT) to avoid
	 * a possible deadlock. They will either get flushed by the writer
	 * or eventually skipped on the panic CPU.
	 */
	if (err)
		return;

	/*
	 * If flushing was successful but more records are available, this
	 * context must flush those remaining records because there is no
	 * other context that will do it.
	 */
	if (prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
		stop_seq = prb_next_reserve_seq(prb);
		goto again;
	}
</proposal>
> +}
> +
> @@ -1064,8 +1205,23 @@ EXPORT_SYMBOL_GPL(nbcon_driver_try_acquire);
> void nbcon_driver_release(struct console *con)
> {
> struct nbcon_context *ctxt = &ACCESS_PRIVATE(con, nbcon_driver_ctxt);
> + int cookie;
>
> - if (nbcon_context_exit_unsafe(ctxt))
> - nbcon_context_release(ctxt);
> + if (!nbcon_context_exit_unsafe(ctxt))
> + return;
> +
> + nbcon_context_release(ctxt);
> +
> + /*
> + * This context must flush any new records added while the console
> + * was locked. The console_srcu_read_lock must be taken to ensure
> + * the console is usable throughout flushing.
> + */
> + cookie = console_srcu_read_lock();
In principle, this should not be needed because the console is
added/removed under con->device_lock() in register_console() and
unregister_console(), and nbcon_driver_release() should be called
under the same lock.

If only we could add a lockdep_assert() here. But I can't think of
any simple solution.
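The closest thing I can think of would be something like the sketch
below, but it needs the driver's device_lock() callback to record the
owner somewhere ('device_lock_owner' is a made-up field, struct console
has nothing like it today), so it is probably not worth the churn:

	/*
	 * Hypothetical sketch only: 'device_lock_owner' is NOT an existing
	 * member of struct console. The driver's device_lock() callback
	 * would have to store 'current' there for this check to work.
	 */
	static inline void nbcon_assert_device_locked(struct console *con)
	{
		WARN_ON_ONCE(READ_ONCE(con->device_lock_owner) != current);
	}
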
> + if (console_is_usable(con, console_srcu_read_flags(con)) &&
> + prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
> + __nbcon_atomic_flush_pending_con(con, prb_next_reserve_seq(prb));
> + }
> + console_srcu_read_unlock(cookie);
> }
> EXPORT_SYMBOL_GPL(nbcon_driver_release);
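For completeness, the driver-side pattern that this release path serves
looks roughly like the sketch below. The my_uart_*() names are made up;
only nbcon_driver_try_acquire()/nbcon_driver_release() come from this
series. While the driver holds the console for a non-printing operation,
printk can only append to the ringbuffer, and the release above then
flushes whatever has accumulated:

	static void my_uart_reconfigure(struct my_uart_port *up)
	{
		struct console *con = up->cons;

		if (!nbcon_driver_try_acquire(con))
			return;	/* sketch; real callers would retry/handle this */

		/* Non-printing hardware access under the console lock. */
		my_uart_write_config(up);

		/* Flushes any records added in the meantime. */
		nbcon_driver_release(con);
	}
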
Best Regards,
Petr