Message-ID: <Ztbgi4-gDvxMYMXw@pathway.suse.cz>
Date: Tue, 3 Sep 2024 12:10:19 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH printk v5 09/17] printk: nbcon: Rely on kthreads for
 normal operation

On Fri 2024-08-30 17:35:08, John Ogness wrote:
> Once the kthread is running and available
> (i.e. @printk_kthreads_running is set), the kthread becomes
> responsible for flushing any pending messages which are added
> in NBCON_PRIO_NORMAL context. Namely the legacy
> console_flush_all() and device_release() no longer flush the
> console. And nbcon_atomic_flush_pending() used by
> nbcon_cpu_emergency_exit() no longer flushes messages added
> after the emergency messages.
> 
> The console context is safe when used by the kthread only when
> one of the following conditions are true:
> 
>   1. Other caller acquires the console context with
>      NBCON_PRIO_NORMAL with preemption disabled. It will
>      release the context before rescheduling.
> 
>   2. Other caller acquires the console context with
>      NBCON_PRIO_NORMAL under the device_lock.
> 
>   3. The kthread is the only context which acquires the console
>      with NBCON_PRIO_NORMAL.
> 
> This is satisfied for all atomic printing call sites:
> 
> nbcon_legacy_emit_next_record() (#1)
> 
> nbcon_atomic_flush_pending_con() (#1)
> 
> nbcon_device_release() (#2)
> 
> It is even double guaranteed when @printk_kthreads_running
> is set because then _only_ the kthread will print for
> NBCON_PRIO_NORMAL. (#3)
> 
> Signed-off-by: John Ogness <john.ogness@...utronix.de>
> ---
>  kernel/printk/internal.h | 26 ++++++++++++++++++++++
>  kernel/printk/nbcon.c    | 17 ++++++++++-----
>  kernel/printk/printk.c   | 47 +++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 83 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
> index a96d4114a1db..8166e24f8780 100644
> --- a/kernel/printk/internal.h
> +++ b/kernel/printk/internal.h
> @@ -113,6 +113,13 @@ static inline bool console_is_usable(struct console *con, short flags, bool use_
>  		/* The write_atomic() callback is optional. */
>  		if (use_atomic && !con->write_atomic)
>  			return false;
> +
> +		/*
> +		 * For the !use_atomic case, @printk_kthreads_running is not
> +		 * checked because the write_thread() callback is also used
> +		 * via the legacy loop when the printer threads are not
> +		 * available.
> +		 */
>  	} else {
>  		if (!con->write)
>  			return false;
> @@ -176,6 +183,7 @@ static inline void nbcon_atomic_flush_pending(void) { }
>  static inline bool nbcon_legacy_emit_next_record(struct console *con, bool *handover,
>  						 int cookie, bool use_atomic) { return false; }
>  static inline void nbcon_kthread_wake(struct console *con) { }
> +static inline void nbcon_kthreads_wake(void) { }
>  
>  static inline bool console_is_usable(struct console *con, short flags,
>  				     bool use_atomic) { return false; }
> @@ -190,6 +198,7 @@ extern bool legacy_allow_panic_sync;
>  /**
>   * struct console_flush_type - Define available console flush methods
>   * @nbcon_atomic:	Flush directly using nbcon_atomic() callback
> + * @nbcon_offload:	Offload flush to printer thread
>   * @legacy_direct:	Call the legacy loop in this context
>   * @legacy_offload:	Offload the legacy loop into IRQ
>   *
> @@ -197,6 +206,7 @@ extern bool legacy_allow_panic_sync;
>   */
>  struct console_flush_type {
>  	bool	nbcon_atomic;
> +	bool	nbcon_offload;
>  	bool	legacy_direct;
>  	bool	legacy_offload;
>  };
> @@ -211,6 +221,22 @@ static inline void printk_get_console_flush_type(struct console_flush_type *ft)
>  
>  	switch (nbcon_get_default_prio()) {
>  	case NBCON_PRIO_NORMAL:
> +		if (have_nbcon_console && !have_boot_console) {
> +			if (printk_kthreads_running)
> +				ft->nbcon_offload = true;
> +			else
> +				ft->nbcon_atomic = true;
> +		}
> +
> +		/* Legacy consoles are flushed directly when possible. */
> +		if (have_legacy_console || have_boot_console) {
> +			if (!is_printk_legacy_deferred())
> +				ft->legacy_direct = true;
> +			else
> +				ft->legacy_offload = true;
> +		}
> +		break;
> +
>  	case NBCON_PRIO_EMERGENCY:
>  		if (have_nbcon_console && !have_boot_console)
>  			ft->nbcon_atomic = true;
> diff --git a/kernel/printk/nbcon.c b/kernel/printk/nbcon.c
> index 8745fffbfbb0..cebdb9936609 100644
> --- a/kernel/printk/nbcon.c
> +++ b/kernel/printk/nbcon.c
> @@ -1492,6 +1492,7 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
>  static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
>  					   bool allow_unsafe_takeover)
>  {
> +	struct console_flush_type ft;
>  	unsigned long flags;
>  	int err;
>  
> @@ -1521,10 +1522,12 @@ static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
>  
>  	/*
>  	 * If flushing was successful but more records are available, this
> -	 * context must flush those remaining records because there is no
> -	 * other context that will do it.
> +	 * context must flush those remaining records if the printer thread
> +	 * is not available to do it.
>  	 */
> -	if (prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
> +	printk_get_console_flush_type(&ft);
> +	if (!ft.nbcon_offload &&
> +	    prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
>  		stop_seq = prb_next_reserve_seq(prb);
>  		goto again;
>  	}
> @@ -1752,17 +1755,19 @@ void nbcon_device_release(struct console *con)
>  
>  	/*
>  	 * This context must flush any new records added while the console
> -	 * was locked. The console_srcu_read_lock must be taken to ensure
> -	 * the console is usable throughout flushing.
> +	 * was locked if the printer thread is not available to do it. The
> +	 * console_srcu_read_lock must be taken to ensure the console is
> +	 * usable throughout flushing.
>  	 */
>  	cookie = console_srcu_read_lock();
> +	printk_get_console_flush_type(&ft);
>  	if (console_is_usable(con, console_srcu_read_flags(con), true) &&
> +	    !ft.nbcon_offload &&
>  	    prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
>  		/*
>  		 * If nbcon_atomic flushing is not available, fallback to
>  		 * using the legacy loop.
>  		 */
> -		printk_get_console_flush_type(&ft);
>  		if (ft.nbcon_atomic) {
>  			__nbcon_atomic_flush_pending_con(con, prb_next_reserve_seq(prb), false);
>  		} else if (ft.legacy_direct) {
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index 55d75db00042..b9378636188e 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -2384,6 +2384,9 @@ asmlinkage int vprintk_emit(int facility, int level,
>  	if (ft.nbcon_atomic)
>  		nbcon_atomic_flush_pending();
>  
> +	if (ft.nbcon_offload)
> +		nbcon_kthreads_wake();
> +
>  	if (ft.legacy_direct) {
>  		/*
>  		 * The caller may be holding system-critical or
> @@ -2732,6 +2735,7 @@ void suspend_console(void)
>  
>  void resume_console(void)
>  {
> +	struct console_flush_type ft;
>  	struct console *con;
>  
>  	if (!console_suspend_enabled)
> @@ -2749,6 +2753,10 @@ void resume_console(void)
>  	 */
>  	synchronize_srcu(&console_srcu);
>  
> +	printk_get_console_flush_type(&ft);
> +	if (ft.nbcon_offload)
> +		nbcon_kthreads_wake();
> +
>  	pr_flush(1000, true);
>  }
>  
> @@ -3060,6 +3068,7 @@ static inline void printk_kthreads_check_locked(void) { }
>   */
>  static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handover)
>  {
> +	struct console_flush_type ft;
>  	bool any_usable = false;
>  	struct console *con;
>  	bool any_progress;
> @@ -3071,12 +3080,21 @@ static bool console_flush_all(bool do_cond_resched, u64 *next_seq, bool *handove
>  	do {
>  		any_progress = false;
>  
> +		printk_get_console_flush_type(&ft);
> +
>  		cookie = console_srcu_read_lock();
>  		for_each_console_srcu(con) {
>  			short flags = console_srcu_read_flags(con);
>  			u64 printk_seq;
>  			bool progress;
>  
> +			/*
> +			 * console_flush_all() is only for legacy consoles when
> +			 * the nbcon consoles have their printer threads.
> +			 */
> +			if ((flags & CON_NBCON) && ft.nbcon_offload)
> +				continue;

If I understand it correctly, we could skip nbcon consoles here also
when ft.nbcon_atomic == true.

In this case, the messages are flushed directly from vprintk_emit()
by nbcon_atomic_flush_pending(). It goes down to
nbcon_atomic_flush_pending_con(), which also takes care of parallel
printk() calls.

The question is whether we want this.

On one hand, we want to separate the legacy code as much as possible,
and it should be needed only when there is either a boot or a legacy
console.

On the other hand, the legacy loop uses console_trylock_spinning(),
which allows handing the owner role over to another waiter, while
nbcon_atomic_flush_pending_con() leaves the responsibility with the
current owner.

Well, the printk kthreads should be started early enough to prevent
softlockups. I hope that the owner-stealing trick won't be needed
for nbcon consoles.

In addition, nbcon_atomic_flush_pending_con() allows flushing
the messages directly even in NMI context.

So, I think that we are going in the right direction. I mean that we
really should handle nbcon consoles in the legacy loop only when there
is a boot console (both ft.nbcon_* == false).
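
For illustration only, here is a minimal sketch of how the check in
console_flush_all() might look with that stricter variant (hypothetical,
not part of the posted patch):

			/*
			 * Handle nbcon consoles in the legacy loop only when
			 * neither the printer threads (nbcon_offload) nor the
			 * direct atomic flush (nbcon_atomic) covers them,
			 * i.e. only while a boot console is registered.
			 */
			if ((flags & CON_NBCON) &&
			    (ft.nbcon_offload || ft.nbcon_atomic))
				continue;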

> +
>  			if (!console_is_usable(con, flags, !do_cond_resched))
>  				continue;
>  			any_usable = true;

Best Regards,
Petr
