Message-ID: <alpine.DEB.2.00.1203211253520.21932@router.home>
Date:	Wed, 21 Mar 2012 12:54:38 -0500 (CDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
cc:	Tejun Heo <tj@...nel.org>, Lai Jiangshan <laijs@...fujitsu.com>,
	Pekka Enberg <penberg@...nel.org>,
	Matt Mackall <mpm@...enic.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: Patch workqueue: create new slab cache instead of hacking

On Wed, 21 Mar 2012, Eric Dumazet wrote:

> On Wed, 2012-03-21 at 10:03 -0500, Christoph Lameter wrote:
> > On Wed, 21 Mar 2012, Eric Dumazet wrote:
> >
> > > Creating a dedicated cache for a few objects? That's a lot of overhead, at
> > > least for SLAB (no merging of caches).
> >
> > It's some overhead for SLAB (how much is "a lot"? If you tune down the
> > per-CPU caches it should be a couple of pages), but it's none for SLUB.
>
> SLAB overhead per cache is O(CPUS * nr_node_ids) (unless alien caches
> are disabled).

nr_node_ids==2 in the standard case these days. Alien caches are minimal.

> For a few in-flight objects, it's just better to use the standard
> kmalloc-xxxx caches.

It's easier to use a custom slab cache, and it avoids the hackery we have in
workqueue.c.
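
For illustration only, the two options being compared look roughly like this
(a sketch, not the actual workqueue patch; "wq_item", the cache name, and the
flags are made up for the example):

#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>

/* Hypothetical object type used only for this sketch. */
struct wq_item {
	struct list_head list;
	void *payload;
};

/*
 * Option 1: a dedicated cache, created once at init time.  Under SLUB a
 * cache with compatible size/flags is merged with an existing one, so the
 * extra cost is essentially zero; under SLAB it costs the per-CPU/per-node
 * arrays discussed above.
 */
static struct kmem_cache *wq_item_cache;

static int __init wq_item_cache_init(void)
{
	wq_item_cache = kmem_cache_create("wq_item", sizeof(struct wq_item),
					  0, SLAB_HWCACHE_ALIGN, NULL);
	return wq_item_cache ? 0 : -ENOMEM;
}

static struct wq_item *wq_item_alloc(gfp_t gfp)
{
	return kmem_cache_zalloc(wq_item_cache, gfp);
}

/* Option 2: no dedicated cache, just the generic kmalloc-<size> caches. */
static struct wq_item *wq_item_alloc_generic(gfp_t gfp)
{
	return kzalloc(sizeof(struct wq_item), gfp);
}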

