Message-ID: <1453798109.17181.70.camel@tiscali.nl>
Date:	Tue, 26 Jan 2016 09:48:29 +0100
From:	Paul Bolle <pebolle@...cali.nl>
To:	Laura Abbott <labbott@...oraproject.org>
Cc:	Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <js1304@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, kernel-hardening@...ts.openwall.com,
	Kees Cook <keescook@...omium.org>
Subject: Re: [RFC][PATCH 2/3] slub: Don't limit debugging to slow paths

On Mon, 2016-01-25 at 17:15 -0800, Laura Abbott wrote:
> --- a/init/Kconfig
> +++ b/init/Kconfig
 
> +config SLUB_DEBUG_FASTPATH
> +	bool "Allow SLUB debugging to utilize the fastpath"
> +	depends on SLUB_DEBUG
> +	help
> +	  SLUB_DEBUG forces all allocations to utilize the slow path which
> +	  is a performance penalty. Turning on this option lets the debugging
> +	  use the fast path. This helps the performance when debugging
> +	  features are turned on. If you aren't planning on utilizing any
> +	  of the SLUB_DEBUG features, you should say N here.
> +
> +	  If unsure, say N
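
(Just as an aside, and not something in the patch itself: with this
Kconfig entry applied, enabling the option should amount to setting both
symbols in the kernel .config, e.g.:

	CONFIG_SLUB_DEBUG=y
	CONFIG_SLUB_DEBUG_FASTPATH=y

since the new option depends on SLUB_DEBUG.)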

> --- a/mm/slub.c
> +++ b/mm/slub.c

> +#ifdef SLUB_DEBUG_FASTPATH

I have no clue what your patch does, but I could spot that this should
probably be
	#ifdef CONFIG_SLUB_DEBUG_FASTPATH
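
That's because Kconfig symbols only become visible to C code with a
CONFIG_ prefix, via the generated include/generated/autoconf.h, so the
bare name SLUB_DEBUG_FASTPATH is never defined and the #else stub below
would always get built. Roughly (just an illustration, not part of the
patch):

	/* include/generated/autoconf.h, generated from the .config: */
	#define CONFIG_SLUB_DEBUG_FASTPATH 1

	/* mm/slub.c: */
	#ifdef SLUB_DEBUG_FASTPATH
	/* bare Kconfig name: never defined, so this branch is never built */
	#endif

	#ifdef CONFIG_SLUB_DEBUG_FASTPATH
	/* prefixed name: defined when the option is set to y */
	#endif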

> +static noinline int alloc_debug_processing_fastpath(struct kmem_cache *s,
> +					struct kmem_cache_cpu *c,
> +					struct page *page,
> +					void *object, unsigned long tid,
> +					unsigned long addr)
> +{
> +	unsigned long flags;
> +	int ret = 0;
> +
> +	preempt_disable();
> +	local_irq_save(flags);
> +
> +	/*
> +	 * We've now disabled preemption and IRQs but we still need
> +	 * to check that this is the right CPU
> +	 */
> +	if (!this_cpu_cmpxchg_double(s->cpu_slab->freelist, s->cpu_slab->tid,
> +				c->freelist, tid,
> +				c->freelist, tid))
> +		goto out;
> +
> +	ret = alloc_debug_processing(s, page, object, addr);
> +
> +out:
> +	local_irq_restore(flags);
> +	preempt_enable();
> +	return ret;
> +}
> +#else
> +static noinline int alloc_debug_processing_fastpath(struct kmem_cache *s,
> +					struct kmem_cache_cpu *c,
> +					struct page *page,
> +					void *object, unsigned long tid,
> +					unsigned long addr)
> +{
> +	return 1;
> +}
> +#endif

Thanks,


Paul Bolle
