Message-ID: <Pine.LNX.4.64.0703091322140.16052@skynet.skynet.ie>
Date:	Fri, 9 Mar 2007 13:55:15 +0000 (GMT)
From:	Mel Gorman <mel@....ul.ie>
To:	Christoph Lameter <clameter@....com>
Cc:	akpm@...l.org, Marcelo Tosatti <marcelo@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	mpm@...enic.com, Manfred Spraul <manfred@...orfullife.com>
Subject: Re: [SLUB 0/3] SLUB: The unqueued slab allocator V4

On Thu, 8 Mar 2007, Christoph Lameter wrote:

> On Thu, 8 Mar 2007, Mel Gorman wrote:
>
>>> Note that the 16kb page size has a major impact on SLUB performance. On
>>> IA64, slub will incur only 1/4th of the locking overhead of 4kb platforms.
>> It'll be interesting to see the kernbench tests then with debugging
>> disabled.
>
> You can get a similar effect on 4kb platforms by specifying slub_min_order=2 on bootup.
> This means that we have to rely on your patches to allow higher order
> allocs to work reliably though.

It should work out because the way buddy always selects the minimum page 
size will tend to cluster the slab allocations together whether they are 
reclaimable or not. It's something I can investigate once slub has 
stabilised a bit.
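
(As a concrete illustration, not something taken from the patches themselves:
on a 4kb-page system, slub_min_order=2 makes each slab an order-2 allocation,
i.e. 4 contiguous pages = 16kb, which is what makes it comparable to the IA64
case quoted above; with roughly 4x the objects per slab, the allocator's
locks are taken roughly 1/4 as often per object, matching the 1/4th figure.
A sketch of how the parameter would be passed at boot; the kernel image path
and root device here are made up for the example:

    # hypothetical GRUB menu.lst kernel line; only slub_min_order=2 matters
    kernel /boot/vmlinuz root=/dev/sda1 ro slub_min_order=2

The same string can go on the kernel command line of whatever bootloader is
in use.)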

However, in general, high-order kernel allocations remain a bad idea. 
Depending on high-order allocations that do not group well could lead to a 
situation where the movable areas are used more and more by kernel 
allocations. I cannot think of a workload that would actually break 
everything, but it's a possibility.

> The higher slub's order, the lower the locking overhead. So the better your
> patches deal with fragmentation, the more we can reduce locking overhead in
> slub.
>

I can certainly kick it around a lot and see what happens. It's best that 
slub_min_order=2 remains an optional performance-enhancing switch though.
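
(A rough way to watch that while kicking it around, for anyone following
along: /proc/buddyinfo prints, for each zone, the number of free blocks at
each order, so whether order-2 blocks stay available under load is directly
visible. Illustrative only; no particular output is implied:

    # counts run from order 0 upwards, so the third number after the zone
    # name is the free order-2 block count that slub_min_order=2 depends on
    cat /proc/buddyinfo

Sampling that while the kernbench runs mentioned above are in progress would
show how quickly the higher orders get eaten.)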

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
