Message-ID: <ZQ2JU9PyZIvu4spa@alley>
Date: Fri, 22 Sep 2023 14:32:19 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH printk v2 05/11] printk: nbcon: Provide function for
atomic flushing
On Wed 2023-09-20 01:14:50, John Ogness wrote:
> From: Thomas Gleixner <tglx@...utronix.de>
>
> Provide nbcon_atomic_flush() to perform atomic write flushing
> of all registered nbcon consoles. Like with legacy consoles,
> the nbcon consoles are flushed one record per console. This
> allows all nbcon consoles to print pseudo-simultaneously,
> rather than one console waiting for the full ringbuffer to
> dump to another console before printing anything.
>
> Note that if the current CPU is in a nested elevated priority
> state (EMERGENCY/PANIC), nbcon_atomic_flush() does nothing.
This confused me a bit. It was not clear to me whether it meant
"nested and elevated" or "the elevated priority was nested".
Well, it is probably because I am not a native speaker.
I would describe it another way, see below.
> This ensures that if the printing itself generates urgent messages
> (OOPS/WARN/PANIC), those messages are fully stored into the
> ringbuffer before any printing resumes.
This is phrased as if it were an advantage. But I would say that it
is a limitation. IMHO, it simply works this way and we should
describe it as a limitation. See below.
> --- a/kernel/printk/nbcon.c
> +++ b/kernel/printk/nbcon.c
> @@ -988,6 +986,105 @@ static __ref struct nbcon_cpu_state *nbcon_get_cpu_state(void)
> return this_cpu_ptr(&nbcon_pcpu_state);
> }
>
> +/**
> + * nbcon_atomic_emit_one - Print one record for a console in atomic mode
> + * @wctxt: An initialized write context struct to use
> + * for this context
> + *
> + * Returns false if the given console could not print a record or there are
> + * no more records to print, otherwise true.
> + *
> + * This is an internal helper to handle the locking of the console before
> + * calling nbcon_emit_next_record().
> + */
> +static bool nbcon_atomic_emit_one(struct nbcon_write_context *wctxt)
> +{
> + struct nbcon_context *ctxt = &ACCESS_PRIVATE(wctxt, ctxt);
> +
> + if (!nbcon_context_try_acquire(ctxt))
> + return false;
> +
> + /*
> + * nbcon_emit_next_record() returns false when the console was
> + * handed over or taken over. In both cases the context is no
> + * longer valid.
> + */
> + if (!nbcon_emit_next_record(wctxt))
> + return false;
> +
> + nbcon_context_release(ctxt);
> +
> + return prb_read_valid(prb, ctxt->seq, NULL);
IMHO, it should be enough to check ctxt->backlog. I mean to do:

	return !!ctxt->backlog;

We are here only when nbcon_emit_next_record() owned the context and
was able to call printk_get_next_message(). This way,
nbcon_emit_next_record() and nbcon_atomic_emit_one() would behave
consistently, especially because prb_read_valid() is currently called
after the console lock has already been released here. A fuller
sketch follows below the quoted function.
> +}
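To illustrate, the tail of nbcon_atomic_emit_one() would then look
like this (only a sketch of the idea, not even compile-tested):

	if (!nbcon_emit_next_record(wctxt))
		return false;

	nbcon_context_release(ctxt);

	/*
	 * ctxt->backlog was set by nbcon_emit_next_record() while
	 * this context still owned the console.
	 */
	return !!ctxt->backlog;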
> +
> +/**
> + * __nbcon_atomic_flush_all - Flush all nbcon consoles in atomic mode
> + * @allow_unsafe_takeover: True, to allow unsafe hostile takeovers
> + */
> +static void __nbcon_atomic_flush_all(bool allow_unsafe_takeover)
> +{
> + struct nbcon_write_context wctxt = { };
> + struct nbcon_context *ctxt = &ACCESS_PRIVATE(&wctxt, ctxt);
> + struct nbcon_cpu_state *cpu_state;
> + struct console *con;
> + bool any_progress;
> + int cookie;
> +
> + cpu_state = nbcon_get_cpu_state();
> +
> + /*
> + * Let the outermost flush of this priority print. This avoids
> + * nasty hackery for nested WARN() where the printing itself
> + * generates one and ensures such nested messages are stored to
> + * the ringbuffer before any printing resumes.
It is not clear to me what hackery is meant here. The fact is that
only printk_once() or WARN_ONCE() should be used in the console
emit/flush code paths. Any non-once printk might block the consoles,
and even these nesting checks probably would not help much.
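For example, a hypothetical sanity check in the emit path should use
the _once variant so that the message itself could not flood the
flush code recursively:

	/* Hypothetical check; a plain WARN() here might recurse. */
	WARN_ONCE(ctxt->prio > NBCON_PRIO_PANIC,
		  "nbcon: invalid console priority %d\n", ctxt->prio);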
Anyway, I believe that we do not need this nesting counter.
The nesting is already prevented by nbcon_context_try_acquire().
It would not allow taking the nested lock with the same priority.
I guess that this extra nesting counter made some sense only in
earlier variants of the per-CPU console lock.
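From my reading, the acquire path already refuses same-priority
nesting with a check roughly like this (simplified, not the exact
code):

	/*
	 * Only a context with a strictly higher priority may take
	 * over the console from the current owner.
	 */
	if (ctxt->prio <= cur->prio)
		return -EPERM;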
I would personally just describe the behavior in the commit message
and in the comment above __nbcon_atomic_flush_all():
* The messages are flushed only when this context is able to
* get the per-console lock. Namely, it works only when the
* lock is free or when this context has a higher priority
* than the current owner.
> + *
> + * cpu_state->prio <= NBCON_PRIO_NORMAL is not subject to nesting
> + * and can proceed in order to allow any atomic printing for
> + * regular kernel messages.
> + */
> + if (cpu_state->prio > NBCON_PRIO_NORMAL &&
> + cpu_state->nesting[cpu_state->prio] != 1)
> + return;
> +
> + do {
> + any_progress = false;
> +
> + cookie = console_srcu_read_lock();
> + for_each_console_srcu(con) {
> + short flags = console_srcu_read_flags(con);
> + bool progress;
> +
> + if (!(flags & CON_NBCON))
> + continue;
> +
> + if (!console_is_usable(con, flags))
> + continue;
> +
> + memset(ctxt, 0, sizeof(*ctxt));
> + ctxt->console = con;
> + ctxt->spinwait_max_us = 2000;
> + ctxt->prio = cpu_state->prio;
> + ctxt->allow_unsafe_takeover = allow_unsafe_takeover;
> +
> + progress = nbcon_atomic_emit_one(&wctxt);
> + if (!progress)
> + continue;
> + any_progress = true;
> + }
> + console_srcu_read_unlock(cookie);
> + } while (any_progress);
> +}
> +
> +/**
> + * nbcon_atomic_flush_all - Flush all nbcon consoles in atomic mode
> + *
> + * Context: Any context where migration is disabled.
We should make it more clear what kind of migration is meant here.
For example:

 * Context: Any context that cannot be migrated to another CPU.

See also the example caller below the quoted code.
> + */
> +void nbcon_atomic_flush_all(void)
> +{
> + __nbcon_atomic_flush_all(false);
> +}
> +
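Also, it might be worth showing how a caller is supposed to prevent
the migration, for example (illustrative only):

	/* The caller must prevent migration across the flush. */
	preempt_disable();
	nbcon_atomic_flush_all();
	preempt_enable();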
Best Regards,
Petr