Date:	Tue, 7 Apr 2009 17:31:57 -0400 (EDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Pekka Enberg <penberg@...helsinki.fi>
cc:	David Rientjes <rientjes@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [patch] slub: default min_partial to at least highest cpus per
  node

On Tue, 7 Apr 2009, Pekka Enberg wrote:

> Christoph Lameter wrote:
> > I am not sure about the benefit of this. If the page allocator finally
> > improves in performance, then we can reduce the minimum number of partial
> > slabs kept and also do pass-through for all sizes >2k.
>
> Sure, we can do that, but it still makes sense to adjust the partial list
> sizes based on per-node CPU count. So I'm inclined to apply the patch I
> suggested if it shows performance gains on one or more of the relevant
> benchmarks (sysbench, tbench, hackbench, netperf, etc.).

The partial list sizes cannot be adjusted. There is just a minimum size
that is set via MIN_PARTIAL. Currently we use that to avoid trips to the
page allocator by keeping the list artificially long. This should not be
done because it increases memory overhead by keeping slab pages around
with no objects. If the page allocator trips are not that expensive
anymore, then there is no reason to keep these slab pages around. A hot
page can be reused by any other subsystem in the kernel.
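
A rough standalone C model of that tradeoff (illustrative only, hypothetical
names, not the actual mm/slub.c logic) could look like this: when the last
object in a slab page is freed, the page is either parked on the node's
partial list or handed back to the page allocator, depending on how long
the list already is.

#include <stdio.h>
#include <stdlib.h>

struct slab_page {
	struct slab_page *next;
};

struct node_partial {
	struct slab_page *list;      /* empty/partial slab pages parked here */
	unsigned long nr_partial;    /* current length of the partial list */
	unsigned long min_partial;   /* keep at least this many pages around */
};

/* Called when the last object in a slab page has been freed. */
static void slab_became_empty(struct node_partial *n, struct slab_page *page)
{
	if (n->nr_partial < n->min_partial) {
		/* Keep the object-free page: the next allocation avoids a
		 * page allocator round trip, at the cost of memory held
		 * with nothing in it. */
		page->next = n->list;
		n->list = page;
		n->nr_partial++;
	} else {
		/* The list is long enough already; return the page so any
		 * other user of the allocator can reuse it while it is
		 * still hot. */
		free(page);
	}
}

int main(void)
{
	struct node_partial n = { .list = NULL, .nr_partial = 0, .min_partial = 2 };
	struct slab_page *page;
	int i;

	for (i = 0; i < 4; i++)
		slab_became_empty(&n, calloc(1, sizeof(struct slab_page)));

	printf("pages kept on the partial list: %lu\n", n.nr_partial);

	while ((page = n.list)) {	/* release the parked pages */
		n.list = page->next;
		free(page);
	}
	return 0;
}

The disagreement above is about whether the first branch is still worth its
memory cost once page allocator trips become cheap.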

