Date:	Fri, 23 Jan 2009 14:12:50 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Nick Piggin <nickpiggin@...oo.com.au>,
	Pekka J Enberg <penberg@...helsinki.fi>,
	yanmin_zhang@...ux.intel.com, Andi Kleen <andi@...stfloor.org>,
	Matthew Wilcox <matthew@....cx>, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org
Subject: Re: [PATCH] SLUB: revert direct page allocator pass through

On Fri, 2009-01-23 at 10:03 -0500, Christoph Lameter wrote:
> On Fri, 23 Jan 2009, Nick Piggin wrote:
> 
> > Hmm, it lists quite a number of advantages that I guess are being
> > reverted too? What test case(s) prompted this commit in the
> > first place? Better make sure it doesn't slow things down...
> 
> The advantage was mainly memory savings and the ability to redefine
> kmallocs to go directly to the page allocator, which totally avoids
> slab allocator overhead.
> 
> I thought higher-order allocations were not supposed to be used in
> performance-critical paths? Didn't you want to do everything with
> order-0 allocs?
> 
> It seems that we currently need the slab allocators to compensate for the
> performance problems in the page allocator for these higher-order allocs.
> I'd rather have the page allocator fixed, but things are as they are.
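
For context, the pass-through being reverted boils down to kmalloc()
short-circuiting sufficiently large requests straight to the page
allocator, along these lines (a simplified illustration, not the exact
slub_def.h code; the PAGE_SIZE threshold is just for illustration):

#include <linux/gfp.h>
#include <linux/slab.h>

/* Simplified sketch of the SLUB kmalloc pass-through: requests bigger
 * than a page skip the slab layer entirely. */
static inline void *kmalloc_sketch(size_t size, gfp_t flags)
{
	if (size > PAGE_SIZE)
		/* go straight to the page allocator */
		return (void *)__get_free_pages(flags, get_order(size));
	return __kmalloc(size, flags);	/* normal slab path */
}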

I still think we should experiment with changing the hierarchy:

Rename all the core get_free_page* functions to buddy_*

Make SL*B call into buddy_* with a default order of N (>=0)

Replace the old get_free_page* functions with simple wrappers that
call into SL*B for order <= N or buddy_* for order > N

This tackles several problems at once:

- fragmentation of SL*B due to small pages
- poor performance of get_free_pages at moderate orders
- poor cache-locality for get_free_pages
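
Very roughly, the replacement wrappers could look something like this
(buddy_alloc_pages() stands in for whatever the renamed core entry point
ends up being called, and PASSTHROUGH_ORDER for the threshold N; both
names are placeholders and this is a sketch, not compile-tested):

#include <linux/gfp.h>
#include <linux/slab.h>

#define PASSTHROUGH_ORDER 3	/* example value of N, to be tuned */

/* placeholder prototype for the renamed core page allocator */
extern unsigned long buddy_alloc_pages(gfp_t gfp_mask, unsigned int order);

unsigned long get_free_pages_wrapper(gfp_t gfp_mask, unsigned int order)
{
	if (order <= PASSTHROUGH_ORDER)
		/* small orders: let SL*B carve the request out of its
		 * own order-N pages for better packing and locality */
		return (unsigned long)kmalloc(PAGE_SIZE << order, gfp_mask);

	/* anything bigger goes straight to the buddy allocator */
	return buddy_alloc_pages(gfp_mask, order);
}

Freeing would need the matching split (free_pages() has to know which
path the memory came from), and SL*B would have to guarantee page
alignment for these sizes, but that's the general shape.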

-- 
http://selenic.com : development and support for Mercurial and Linux


