Message-Id: <1272332477.2078.674.camel@ymzhang.sh.intel.com>
Date:	Tue, 27 Apr 2010 09:41:17 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Kernel Testers List <kernel-testers@...r.kernel.org>,
	Maciej Rutecki <maciej.rutecki@...il.com>,
	Alex Shi <alex.shi@...el.com>, tim.c.chen@...el.com
Subject: Re: [Bug #15713] hackbench regression due to commit 9dfc6e68bfe6e

On Mon, 2010-04-26 at 13:09 +0300, Pekka Enberg wrote:
> Hi,
> 
> On Mon, Apr 26, 2010 at 9:59 AM, Zhang, Yanmin
> <yanmin_zhang@...ux.intel.com> wrote:
> >>>> I haven't been able to reproduce this either on my Core 2 machine.
> >>> Mostly, the regression exists on Nehalem machines. I suspect it's related
> >>> to hyper-threading.
> 
> On 04/26/2010 09:22 AM, Pekka Enberg wrote:
> >> OK, so does anyone know why hyper-threading would change things for
> >> the per-CPU allocator?
> 
> On Mon, Apr 26, 2010 at 1:02 PM, Tejun Heo <tj@...nel.org> wrote:
> > My wild speculation is that previously the cpu_slab structures of two
> > neighboring threads ended up on the same cacheline by accident, thanks
> > to the back-to-back allocation.  With the percpu allocator, this would
> > no longer happen, as the allocator groups percpu data together
> > per-CPU.
> 
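As an aside, the layout difference above can be illustrated with a small
userspace sketch (not kernel code: the struct shape, the sizes, and the
64-byte line are all assumptions). Back-to-back allocation lets
neighboring CPUs' structures share a cache line by accident; one
line-aligned area per CPU never does:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define CACHELINE 64                    /* assumed L1 line size */
#define NCPUS     16

struct fake_cpu_slab {                  /* stand-in for the per-CPU slab struct */
        void *freelist;
        void *page;
        int   node;
};                                      /* 24 bytes: two fit in one line */

static int same_line(const void *a, const void *b)
{
        return (uintptr_t)a / CACHELINE == (uintptr_t)b / CACHELINE;
}

int main(void)
{
        /* old scheme: one back-to-back array for all CPUs */
        struct fake_cpu_slab *packed = calloc(NCPUS, sizeof(*packed));

        /* new scheme: the percpu allocator keeps each CPU's data in its
         * own per-CPU area; modeled as one line-aligned chunk per CPU */
        struct fake_cpu_slab *spread[NCPUS];
        for (int i = 0; i < NCPUS; i++)
                if (posix_memalign((void **)&spread[i], CACHELINE, CACHELINE))
                        return 1;

        printf("packed: cpu0/cpu1 share a line? %d\n",
               same_line(&packed[0], &packed[1]));
        printf("spread: cpu0/cpu1 share a line? %d\n",
               same_line(spread[0], spread[1]));
        return 0;
}
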
> Yanmin, do we see a lot of remote frees for your hackbench run? IIRC,
> it's the "deactivate_remote_frees" stat when CONFIG_SLUB_STATS is
> enabled.

After running the test with 2.6.34-rc5 (the two %Fast numbers are the
alloc and free fastpath hit percentages):

#slabinfo -AD
Name                   Objects      Alloc       Free   %Fast Fallb O
skbuff_head_cache         2518  800011810  800009770  95  19     0 1
kmalloc-512               1101  800009118  800008441  95  19     0 2
anon_vma_chain            2500     195878     194477  98  13     0 0
vm_area_struct            2487     160755     158908  97  20     0 1
anon_vma                  2645      88626      87637  99  12     0 0

[ymzhang@...-ne01 ~]$ cat /sys/kernel/slab/skbuff_head_cache/deactivate_remote_frees
1 C13=1
[ymzhang@...-ne01 ~]$ cat /sys/kernel/slab/kmalloc-512/deactivate_remote_frees       
3 C8=2 C15=1
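
So remote frees are essentially nonexistent in this run: totals of 1 and
3 against roughly 800 million frees (the first field is the total, each
C<n>=<count> entry the per-CPU breakdown).

In case it is useful, here is a small scanner that prints every cache
with a nonzero count (illustration only; assumes CONFIG_SLUB_STATS and
the /sys/kernel/slab layout, and aliased caches may be listed twice):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
        const char *base = "/sys/kernel/slab";
        struct dirent *de;
        DIR *dir = opendir(base);
        char path[512], line[256];

        if (!dir) {
                perror(base);
                return 1;
        }
        while ((de = readdir(dir)) != NULL) {
                FILE *f;

                if (de->d_name[0] == '.')
                        continue;
                snprintf(path, sizeof(path),
                         "%s/%s/deactivate_remote_frees", base, de->d_name);
                f = fopen(path, "r");
                if (!f)
                        continue;       /* no stats file here, skip */
                if (fgets(line, sizeof(line), f)) {
                        line[strcspn(line, "\n")] = '\0';
                        if (atol(line) > 0)     /* first field is the total */
                                printf("%-24s %s\n", de->d_name, line);
                }
                fclose(f);
        }
        closedir(dir);
        return 0;
}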


After running the test against the 2.6.33 kernel:
#slabinfo -AD
Name                   Objects      Alloc       Free   %Fast Fallb O
kmalloc-1024               961  800011628  800011167  93   1     0 3
skbuff_head_cache         2518  800012055  800010015  93   1     0 1
vm_area_struct            2892     162196     159987  97  19     0 1
names_cache                128      47139      47141  99  97     0 3
kmalloc-64                3612      40180      37287  99  89     0 0
Acpi-State                 816      36301      36301  99  98     0 0

I remember that with 2.6.34-rc1, the fast alloc/free percentages were
close to those of 2.6.33.


