Message-Id: <1270711115.2078.385.camel@ymzhang.sh.intel.com>
Date:	Thu, 08 Apr 2010 15:18:35 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	netdev <netdev@...r.kernel.org>, Tejun Heo <tj@...nel.org>,
	Pekka Enberg <penberg@...helsinki.fi>, alex.shi@...el.com,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Ma, Ling" <ling.ma@...el.com>,
	"Chen, Tim C" <tim.c.chen@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: hackbench regression due to commit 9dfc6e68bfe6e

On Wed, 2010-04-07 at 11:43 -0500, Christoph Lameter wrote:
> On Wed, 7 Apr 2010, Zhang, Yanmin wrote:
> 
> > I collected retired instruction, dtlb miss and LLC miss.
> > Below is data of LLC miss.
> >
> > Kernel 2.6.33:
> >     20.94%        hackbench  [kernel.kallsyms]                                       [k] copy_user_generic_string
> >     14.56%        hackbench  [kernel.kallsyms]                                       [k] unix_stream_recvmsg
> >     12.88%        hackbench  [kernel.kallsyms]                                       [k] kfree
> >      7.37%        hackbench  [kernel.kallsyms]                                       [k] kmem_cache_free
> >      7.18%        hackbench  [kernel.kallsyms]                                       [k] kmem_cache_alloc_node
> >      6.78%        hackbench  [kernel.kallsyms]                                       [k] kfree_skb
> >      6.27%        hackbench  [kernel.kallsyms]                                       [k] __kmalloc_node_track_caller
> >      2.73%        hackbench  [kernel.kallsyms]                                       [k] __slab_free
> >      2.21%        hackbench  [kernel.kallsyms]                                       [k] get_partial_node
> >      2.01%        hackbench  [kernel.kallsyms]                                       [k] _raw_spin_lock
> >      1.59%        hackbench  [kernel.kallsyms]                                       [k] schedule
> >      1.27%        hackbench  hackbench                                               [.] receiver
> >      0.99%        hackbench  libpthread-2.9.so                                       [.] __read
> >      0.87%        hackbench  [kernel.kallsyms]                                       [k] unix_stream_sendmsg
> >
> > Kernel 2.6.34-rc3:
> >     18.55%        hackbench  [kernel.kallsyms]                                                     [k] copy_user_generic_string
> >     13.19%        hackbench  [kernel.kallsyms]                                                     [k] unix_stream_recvmsg
> >     11.62%        hackbench  [kernel.kallsyms]                                                     [k] kfree
> >      8.54%        hackbench  [kernel.kallsyms]                                                     [k] kmem_cache_free
> >      7.88%        hackbench  [kernel.kallsyms]                                                     [k] __kmalloc_node_track_caller
> 
> It seems the overhead of __kmalloc_node_track_caller increased. That
> function inlines slab_alloc().
> 
> >      6.54%        hackbench  [kernel.kallsyms]                                                     [k] kmem_cache_alloc_node
> >      5.94%        hackbench  [kernel.kallsyms]                                                     [k] kfree_skb
> >      3.48%        hackbench  [kernel.kallsyms]                                                     [k] __slab_free
> >      2.15%        hackbench  [kernel.kallsyms]                                                     [k] _raw_spin_lock
> >      1.83%        hackbench  [kernel.kallsyms]                                                     [k] schedule
> >      1.82%        hackbench  [kernel.kallsyms]                                                     [k] get_partial_node
> >      1.59%        hackbench  hackbench                                                             [.] receiver
> >      1.37%        hackbench  libpthread-2.9.so                                                     [.] __read
> 
> I wonder if this is related to the kmem_cache_cpu structure straddling
> cache line boundaries under some conditions. On 2.6.33 the kmem_cache_cpu
> structure was larger, so the tight packing resulted in different
> alignment.
> 
> Could you check how the following patch affects the results? It attempts to
> increase the size of kmem_cache_cpu to a power-of-2 number of bytes. There
> is also the potential that other per cpu fetches to neighboring objects
> affect the situation. We could cacheline align the whole structure.
I tested the patch on top of 2.6.33+9dfc6e68bfe6e, and it doesn't seem to help.

I also dumped the percpu allocation info at kernel boot and didn't find any
clear sign of misalignment.

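Roughly, the check looked like this (a sketch rather than the exact code I
ran; it assumes the post-9dfc6e68bfe6e layout where s->cpu_slab is the
percpu kmem_cache_cpu pointer, and dump_cpu_slab_alignment is just an
ad-hoc name):

/* Report where each CPU's kmem_cache_cpu lands relative to 64-byte
 * cache lines; an instance that straddles a boundary costs an extra
 * line on every hot-path access.
 */
static void dump_cpu_slab_alignment(struct kmem_cache *s)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
		unsigned long off = (unsigned long)c % 64;

		printk(KERN_INFO "%s cpu%d: %p line-offset=%lu size=%zu%s\n",
		       s->name, cpu, c, off, sizeof(*c),
		       off + sizeof(*c) > 64 ? " (straddles)" : "");
	}
}
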
> 
> ---
>  include/linux/slub_def.h |    5 +++++
>  1 file changed, 5 insertions(+)
> 
> Index: linux-2.6/include/linux/slub_def.h
> ===================================================================
> --- linux-2.6.orig/include/linux/slub_def.h	2010-04-07 11:33:50.000000000 -0500
> +++ linux-2.6/include/linux/slub_def.h	2010-04-07 11:35:18.000000000 -0500
> @@ -38,6 +38,11 @@ struct kmem_cache_cpu {
>  	void **freelist;	/* Pointer to first free per cpu object */
>  	struct page *page;	/* The slab from which we are allocating */
>  	int node;		/* The node of the page (or -1 for debug) */
> +#ifndef CONFIG_64BIT
> +	int dummy1;
> +#endif
> +	unsigned long dummy2;
> +
>  #ifdef CONFIG_SLUB_STATS
>  	unsigned stat[NR_SLUB_STAT_ITEMS];
>  #endif

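FWIW, the straddling arithmetic behind the patch is easy to sanity-check in
userspace (standalone sketch; 24 bytes is sizeof(struct kmem_cache_cpu) on
x86-64 without CONFIG_SLUB_STATS after 9dfc6e68bfe6e, and the base is
assumed to be cache-line aligned):

#include <stdio.h>

#define LINE 64

/* Pack n objects of the given size back-to-back from a line-aligned
 * base and count how many cross a 64-byte boundary.
 */
static int count_straddlers(size_t size, int n)
{
	int i, straddles = 0;

	for (i = 0; i < n; i++) {
		size_t start = i * size;
		size_t end = start + size - 1;

		if (start / LINE != end / LINE)
			straddles++;
	}
	return straddles;
}

int main(void)
{
	printf("24-byte objects: %d of 8 straddle a line\n",
	       count_straddlers(24, 8));
	printf("32-byte objects: %d of 8 straddle a line\n",
	       count_straddlers(32, 8));
	return 0;
}

Two of every eight tightly packed 24-byte objects cross a line boundary
(the pattern repeats every 192 bytes); at 32 bytes none do, which is
presumably the point of padding the size to a power of 2.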
