Message-ID: <Pine.LNX.4.64.0709042053320.7231@schroedinger.engr.sgi.com>
Date:	Tue, 4 Sep 2007 20:59:30 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
cc:	LKML <linux-kernel@...r.kernel.org>, mingo@...e.hu
Subject: Re: tbench regression - Why process scheduler has impact on tbench
 and why small per-cpu slab (SLUB) cache creates the scenario?

On Wed, 5 Sep 2007, Zhang, Yanmin wrote:

> 8) kmalloc-4096 order is 1, which means one slab consists of 2 objects. So a

You can change that by booting with slub_max_order=0. Then we can also use 
the per-cpu queues to get these order-0 objects, which may speed up the 
allocations because we do not have to take zone locks on slab allocation.
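
As a quick sanity check (an assumption on my part, not verified here):
after rebooting with slub_max_order=0 the resulting order can be read
back from sysfs, e.g. /sys/slab/kmalloc-4096/order should then report 0
(on later kernels the SLUB attributes live under /sys/kernel/slab/).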

Note also that Andrew's tree has a page allocator pass-through for SLUB 
for 4k kmallocs that bypasses the slab layer completely. That may also 
address the issue.
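
For illustration only, here is a rough sketch of that pass-through idea
(not the actual patch in Andrew's tree): kmalloc() requests of a page or
more skip the slab layer and go straight to the page allocator.

#include <linux/gfp.h>
#include <linux/slab.h>

/*
 * Sketch only: sizes of PAGE_SIZE or more bypass SLUB entirely and are
 * satisfied straight from the page allocator; smaller sizes take the
 * normal SLUB path via __kmalloc().
 */
static void *kmalloc_passthrough(size_t size, gfp_t flags)
{
	if (size >= PAGE_SIZE)
		return (void *)__get_free_pages(flags, get_order(size));
	return __kmalloc(size, flags);
}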

If you want SLUB to handle more objects in the 4k kmalloc cache 
without going to the page allocator, then you can boot, e.g., with

slub_max_order=3 slub_min_objects=8

which will result in a kmalloc-4096 that caches 8 objects.
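
For reference: order 3 means each slab spans 2^3 = 8 pages, i.e. 32k
with 4k pages, which is exactly eight 4096-byte objects, and
slub_min_objects=8 keeps SLUB from falling back to a smaller order.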

> 	b) Change the SLUB per-cpu slab cache to cache multiple slabs instead of only one
> slab. This could use page->lru to create a list linked into kmem_cache->cpu_slab[],
> whose members would need to be changed to list_head. How many slabs may sit in a
> per-cpu slab cache could be exposed as a sysfs parameter under /sys/slab/XXX/.
> The default could be 1 to satisfy big machines.
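
To make the quoted proposal concrete, here is a rough sketch of the
per-cpu structure it implies (names are made up for illustration, not
actual kernel code): each cpu keeps a short list of slab pages chained
through page->lru instead of a single cpu slab, capped by a sysfs
tunable.

#include <linux/list.h>

/* Hypothetical per-cpu state for the proposal; fields are illustrative. */
struct slub_cpu_cache_sketch {
	struct list_head slabs;		/* slab pages, linked via page->lru */
	unsigned int nr_slabs;		/* current number of pages on the list */
	unsigned int max_slabs;		/* sysfs-tunable cap, default 1 */
};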

Try the approaches I mentioned above to address the issue.

