Message-ID: <20161006130849.GG13369@pathway.suse.cz>
Date:   Thu, 6 Oct 2016 15:08:49 +0200
From:   Petr Mladek <pmladek@...e.com>
To:     Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc:     Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
        Tejun Heo <tj@...nel.org>, Calvin Owens <calvinowens@...com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Steven Rostedt <rostedt@...dmis.org>,
        linux-kernel@...r.kernel.org,
        Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [RFC][PATCHv2 3/7] printk: introduce per-cpu alt_print seq buffer

On Sat 2016-10-01 00:17:54, Sergey Senozhatsky wrote:
> This patch extends the idea of NMI per-cpu buffers to regions
> that may cause recursive printk() calls and possible deadlocks.

> diff --git a/kernel/printk/alt_printk.c b/kernel/printk/alt_printk.c
> index 7178661..4bc1e7d 100644
> --- a/kernel/printk/alt_printk.c
> +++ b/kernel/printk/alt_printk.c
>  	len = atomic_read(&s->len);
>  
> -	if (len >= sizeof(s->buffer)) {
> -		atomic_inc(&nmi_message_lost);
> +	if (len >= sizeof(s->buffer))
>  		return 0;
> -	}
>  
>  	/*
>  	 * Make sure that all old data have been read before the buffer was
> @@ -240,6 +235,83 @@ void alt_printk_flush_on_panic(void)
>  	alt_printk_flush();
>  }
>  
> +/*
> + * Safe printk() for NMI context. It uses a per-CPU buffer to
> + * store the message. NMIs are not nested, so there is always only
> + * one writer running. But the buffer might get flushed from another
> + * CPU, so we need to be careful.
> + */
> +static int vprintk_nmi(const char *fmt, va_list args)
> +{
> +	struct alt_printk_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
> +	int add;
> +
> +	add = alt_printk_log_store(s, fmt, args);
> +	if (!add)
> +		atomic_inc(&nmi_message_lost);

This would also count an empty string as an error. A solution might be
to update alt_printk_log_store() to return -1 in case of a lost message.
Note that vprintk_nmi() still needs to return 0 in this case to
stay compatible with printk().
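
Something like this (just an untested sketch, assuming the
alt_printk_log_store() change described above):

	static int vprintk_nmi(const char *fmt, va_list args)
	{
		struct alt_printk_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
		int add;

		add = alt_printk_log_store(s, fmt, args);
		if (add < 0) {
			/* the message was lost; count it */
			atomic_inc(&nmi_message_lost);
			/* stay compatible with printk(): report 0 stored chars */
			add = 0;
		}

		return add;
	}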

> +
> +	return add;
> +}
> +
> +void printk_nmi_enter(void)
> +{
> +	this_cpu_or(alt_printk_ctx, ALT_PRINTK_NMI_CONTEXT_MASK);
> +}
> +
> +void printk_nmi_exit(void)
> +{
> +	this_cpu_and(alt_printk_ctx, ~ALT_PRINTK_NMI_CONTEXT_MASK);
> +}
> +
> +/*
> + * Lockless printk(), to avoid deadlocks should the printk() recurse
> + * into itself. It uses a per-CPU buffer to store the message, just like
> + * NMI.
> + */
> +static int vprintk_alt(const char *fmt, va_list args)
> +{
> +	struct alt_printk_seq_buf *s = this_cpu_ptr(&alt_print_seq);
> +
> +	return alt_printk_log_store(s, fmt, args);

We should handle lost messages here as well. But it can be
done in a follow-up patch.
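
For the record, it might look the same as in vprintk_nmi(); just a
sketch, and alt_message_lost below is a made-up counter analogous to
nmi_message_lost:

	static int vprintk_alt(const char *fmt, va_list args)
	{
		struct alt_printk_seq_buf *s = this_cpu_ptr(&alt_print_seq);
		int add;

		add = alt_printk_log_store(s, fmt, args);
		if (add < 0) {
			/* hypothetical counter, analogous to nmi_message_lost */
			atomic_inc(&alt_message_lost);
			add = 0;
		}

		return add;
	}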

> +}
> +
> +/*
> + * Returns with local IRQs disabled.
> + * Can be preempted by NMI.
> + */
> +void alt_printk_enter(void)
> +{
> +	unsigned long flags;
> +	int entry_count;
> +
> +	local_irq_save(flags);
> +	if (!(this_cpu_read(alt_printk_ctx) & ALT_PRINTK_CONTEXT_MASK))
> +		this_cpu_write(alt_printk_irq_flags, flags);
> +	this_cpu_inc(alt_printk_ctx);
> +}
> +
> +/*
> + * Restores local IRQs state saved in alt_printk_enter().
> + * Can be preempted by NMI.
> + */
> +void alt_printk_exit(void)
> +{
> +	this_cpu_dec(alt_printk_ctx);
> +	if (!(this_cpu_read(alt_printk_ctx) & ALT_PRINTK_CONTEXT_MASK))
> +		local_irq_restore(this_cpu_read(alt_printk_irq_flags));
> +}

I will discuss this under your reply that explains the details.

Anyway, it looks much easier now.

Best Regards,
Petr
