Message-ID: <Pine.LNX.4.64.0805141100110.15633@schroedinger.engr.sgi.com>
Date:	Wed, 14 May 2008 11:03:18 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Andi Kleen <andi@...stfloor.org>
cc:	Pekka Enberg <penberg@...helsinki.fi>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Mel Gorman <mel@...net.ie>, mpm@...enic.com,
	Matthew Wilcox <matthew@....cx>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: Re: [patch 21/21] slab defrag: Obsolete SLAB

On Wed, 14 May 2008, Andi Kleen wrote:

> iirc profiling analysis showed that the problem was the page lock
> serialization (in particular the slab_lock() in __slab_free). That
> was on 2.6.24.2

Do you have a URL?

> I think the problem is that this atomic operation thrashes cache lines
> around. Really counting cycles on instructions is not that interesting,
> but minimizing the cache thrashing is. And for that it looks like slub
> is worse.

It can thrash cachelines if objects from the same slab page are freed 
simultaneously on multiple processors. That occurred in the hackbench 
regression that we addressed with the dynamic configuration of slab sizes.
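
For illustration, a minimal user-space model of that effect (the names
slab_page and free_obj are made up for the sketch, this is not the
actual SLUB code): every free of an object in the same page serializes
on one lock word embedded in the page struct, so the cache line holding
it bounces between the freeing CPUs.

#include <stdatomic.h>
#include <stddef.h>

struct slab_page {
	atomic_flag lock;	/* models slab_lock()/bit_spin_lock() */
	void *freelist;		/* head of the per-page free object list */
};

static void free_obj(struct slab_page *page, void **object)
{
	/* acquire the per-page lock; concurrent freers spin here,
	 * pulling the page's cache line back and forth */
	while (atomic_flag_test_and_set_explicit(&page->lock,
						 memory_order_acquire))
		;
	*object = page->freelist;	/* push object onto the freelist */
	page->freelist = object;
	atomic_flag_clear_explicit(&page->lock, memory_order_release);
}

Frees into different pages take different lock words, which is why the
contention only shows up when many objects from one page are freed
simultaneously.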

However, typically long-lived objects freed from multiple processors 
belong to different slab caches.

> > So I think that the free needs to stay as is. The disadvantages in 
> > terms of the complexity of handling the objects and expiring them, 
> > and the issue of having to take per-node locks in SLAB, make it hard 
> > to justify adding a queue for free in SLUB. Maybe someone has an 
> > inspiration on how to do this effectively that is better than my 
> > attempts, which always ultimately ended in implementing code that 
> > had the same issues that we have in SLAB.
> 
> What is the big problem with having a batched free queue? If the expiry
> is done at a well-bounded time (e.g. on interrupt exit or similar),
> locally on the CPU, it shouldn't be a big issue, should it?

Interrupt exit in general would have to inspect the per-CPU structures of 
all slab caches on the system?
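
To make the tradeoff concrete, a rough sketch of the kind of per-CPU
batched free queue being proposed (queue_free, drain, FREE_BATCH are
hypothetical names, not SLUB code): free only stashes the object
locally, and a later drain pass does the real lock-taking frees in a
batch.

#define FREE_BATCH 32

struct free_queue {
	void *objects[FREE_BATCH];	/* deferred objects; one queue */
	int nr;				/* per CPU and per slab cache  */
};

static void drain(struct free_queue *q, void (*real_free)(void *))
{
	for (int i = 0; i < q->nr; i++)
		real_free(q->objects[i]);	/* the real, lock-taking free */
	q->nr = 0;
}

static void queue_free(struct free_queue *q, void *object,
		       void (*real_free)(void *))
{
	q->objects[q->nr++] = object;
	if (q->nr == FREE_BATCH)	/* expiry bounded by batch size */
		drain(q, real_free);
}

Since there is one such queue per CPU *and* per cache, a generic drain
point like interrupt exit has no single queue to flush: it would have to
find and walk the per-CPU state of every cache with pending objects,
which is exactly the SLAB-style bookkeeping described above that the
queue was supposed to avoid.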