Message-ID: <84144f020904071209j638aae9bv406661ec401af7af@mail.gmail.com>
Date:	Tue, 7 Apr 2009 22:09:46 +0300
From:	Pekka Enberg <penberg@...helsinki.fi>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Christoph Lameter <cl@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] slub: default min_partial to at least highest cpus per node

David Rientjes wrote:
>> The pre-defined MIN_PARTIAL value may not be suitable for machines with a
>> large number of cpus per node.  To avoid allocating new slabs excessively
>> when a node's partial list holds fewer slabs than the node has cpus, this
>> defaults a cache's min_partial value to at least the highest number of
>> cpus per node on the system.
>>
>> This default will never exceed MAX_PARTIAL, however, so very large
>> systems don't waste an excessive amount of memory.
>>
>> When remote_node_defrag_ratio allows defragmenting remote nodes, it
>> ensures that nr_partial exceeds min_partial, so a local reserve remains
>> when a cpu slab is filled; this avoids allocating new slabs locally when
>> a remote cpu steals a partial slab.
>>
>> The cache's min_partial setting may still be changed by writing to
>> /sys/kernel/slab/cache/min_partial.  The only restriction is that the
>> value lie between MIN_PARTIAL and MAX_PARTIAL.
>>
>> Cc: Christoph Lameter <cl@...ux-foundation.org>
>> Signed-off-by: David Rientjes <rientjes@...gle.com>
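
For illustration, the defaulting rule described above amounts to a
simple clamp. A minimal standalone sketch follows; the function name is
made up for this example, and the MIN_PARTIAL/MAX_PARTIAL values (5 and
10) are SLUB's bounds as I recall them, not a quote of the patch:

#include <stdio.h>

/* Illustrative constants; SLUB defines its own in mm/slub.c. */
#define MIN_PARTIAL 5
#define MAX_PARTIAL 10

/* Default a cache's min_partial to at least the highest number of
 * cpus on any node, but never above MAX_PARTIAL. */
static unsigned long default_min_partial(unsigned long max_cpus_per_node)
{
	unsigned long min = MIN_PARTIAL;

	if (max_cpus_per_node > min)
		min = max_cpus_per_node;
	if (min > MAX_PARTIAL)
		min = MAX_PARTIAL;
	return min;
}

int main(void)
{
	/* A machine with 16 cpus per node still clamps to 10. */
	printf("%lu\n", default_min_partial(16));
	return 0;
}

So a machine with, say, 16 cpus per node settles at MAX_PARTIAL rather
than pinning 16 partial slabs per node for every cache.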

On Tue, Apr 7, 2009 at 9:58 PM, Pekka Enberg <penberg@...helsinki.fi> wrote:
> Hmm, partial lists are per-node, so wouldn't it be better to do the
> adjustment for every struct kmem_cache_node separately? The
> 'min_partial_per_node' global seems just too ugly and confusing to live
> with.

Btw, that requires moving ->min_partial from struct kmem_cache to
struct kmem_cache_node.  But I think that makes a whole lot of sense if
some nodes may have more CPUs than others.
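
A hypothetical sketch of what that move would look like; the list-head
stub and the field comments are mine, and the real struct carries locks
and debug fields omitted here:

/* Stub so the sketch stands alone; the kernel has its own. */
struct list_head { struct list_head *next, *prev; };

struct kmem_cache_node {
	unsigned long nr_partial;	/* slabs on this node's partial list */
	unsigned long min_partial;	/* per-node floor, moved here from
					 * struct kmem_cache (hypothetical) */
	struct list_head partial;	/* the partial list itself */
};

Each node could then size min_partial from its own online cpu count
instead of a single system-wide maximum.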

And while the improvement is fairly obvious, I would be interested to
know what kind of workload benefits from this patch (and to see
numbers, if there are any).

                                  Pekka
