Date:	Mon, 25 Aug 2014 17:26:15 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
	Tejun Heo <htejun@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] mm/slab: use percpu allocator for cpu cache

On Thu, Aug 21, 2014 at 09:21:30AM -0500, Christoph Lameter wrote:
> On Thu, 21 Aug 2014, Joonsoo Kim wrote:
> 
> > So, this patch tries to use the percpu allocator in SLAB. This
> > simplifies the initialization step in SLAB so that we can maintain
> > the SLAB code more easily.
> 
> I thought about this a couple of times, but the amount of memory used
> for the per-cpu arrays can be huge. In contrast to slub, which needs
> just a few pointers, slab requires one pointer per object that can be
> in the local cache. CC Tj.
> 
> Let's say we have 300 caches and we allow 1000 objects to be cached
> per cpu. That is 300k pointers per cpu: 1.2 MB on 32-bit, 2.4 MB per
> cpu on 64-bit.
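
For concreteness, a quick back-of-the-envelope check of those numbers
(plain userspace C, not kernel code; the 300-cache / 1000-object
figures are simply the ones quoted above):

#include <stdio.h>

int main(void)
{
	/* Assumed workload from the discussion above: 300 caches, each
	 * allowed to keep 1000 cached object pointers per cpu. */
	unsigned long caches = 300;
	unsigned long objs_per_cpu = 1000;
	unsigned long ptrs = caches * objs_per_cpu;	/* 300,000 pointers per cpu */

	printf("32-bit: %lu KB per cpu\n", ptrs * 4 / 1024);	/* ~1.2 MB */
	printf("64-bit: %lu KB per cpu\n", ptrs * 8 / 1024);	/* ~2.4 MB */
	return 0;
}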

Hello, Christoph.

The amount of memory needed to keep the object pointers is the same in
either case: whether we use the percpu allocator or a kmalloc'ed array
per cpu, SLAB still needs one pointer per cached object per cpu.
I know that the percpu allocator allocates from vmalloc space, so maybe
we could exhaust vmalloc space on 32-bit; 64-bit has no problem there.
How many cores does the largest 32-bit system have? Is it actually
possible to exhaust vmalloc space if we use the percpu allocator?
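
Roughly, the patch boils down to something like the sketch below (a
simplified illustration, not the actual patch; the struct layout shown
and the helper name alloc_kmem_cpu_cache() are assumptions for the
example). A single __alloc_percpu() call replaces the kmalloc'ed array
per cpu, and that allocation is what ends up in the percpu allocator's
vmalloc-backed chunks:

#include <linux/percpu.h>
#include <linux/cpumask.h>

struct array_cache {
	unsigned int avail;
	unsigned int limit;
	unsigned int batchcount;
	unsigned int touched;
	void *entry[];			/* one pointer per cached object */
};

/* Illustrative helper: allocate the cpu cache for one kmem_cache with
 * the percpu allocator instead of one kmalloc'ed array per cpu. */
static struct array_cache __percpu *alloc_kmem_cpu_cache(int limit,
							  int batchcount)
{
	size_t size = sizeof(struct array_cache) + limit * sizeof(void *);
	struct array_cache __percpu *cpu_cache;
	int cpu;

	/* One allocation covers all possible cpus; the memory comes
	 * from percpu chunks, which live in vmalloc space. */
	cpu_cache = __alloc_percpu(size, sizeof(void *));
	if (!cpu_cache)
		return NULL;

	for_each_possible_cpu(cpu) {
		struct array_cache *ac = per_cpu_ptr(cpu_cache, cpu);

		ac->avail = 0;
		ac->limit = limit;
		ac->batchcount = batchcount;
		ac->touched = 0;
	}
	return cpu_cache;
}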

Thanks.
