Message-Id: <200710251234.01440.nickpiggin@yahoo.com.au>
Date:	Thu, 25 Oct 2007 12:34:01 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Christoph Lameter <clameter@....com>
Cc:	Alexey Dobriyan <adobriyan@...il.com>, Mel Gorman <mel@...net.ie>,
	Pekka Enberg <penberg@...helsinki.fi>,
	linux-kernel@...r.kernel.org, linux-mm@...r.kernel.org
Subject: Re: SLUB 0:1 SLAB (OOM during massive parallel kernel builds)

On Thursday 25 October 2007 12:15, Christoph Lameter wrote:
> On Wed, 24 Oct 2007, Alexey Dobriyan wrote:
> > [12728.701398] DMA free:8032kB min:32kB low:40kB high:48kB active:2716kB inactive:2208kB present:12744kB pages_scanned:9299 all_unreclaimable? yes
> > [12728.701567] lowmem_reserve[]: 0 2003 2003 2003
> > [12728.701654]
>
> Ummm... all unreclaimable is set! Are you mlocking the pages in memory? Or
> what causes this? All pages under writeback? What is the dirty ratio set
> to?
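
For anyone wanting to double-check, those writeback knobs can be read straight out of /proc. A trivial userspace sketch, illustrative only (it does nothing more than cat'ing /proc/sys/vm/dirty_ratio and friends would):

#include <stdio.h>

/* Dump the VM writeback thresholds in question.
 * Purely illustrative; same as reading the sysctl files by hand. */
int main(void)
{
	static const char *knobs[] = {
		"/proc/sys/vm/dirty_ratio",
		"/proc/sys/vm/dirty_background_ratio",
	};
	int i, val;

	for (i = 0; i < 2; i++) {
		FILE *f = fopen(knobs[i], "r");

		if (f && fscanf(f, "%d", &val) == 1)
			printf("%s = %d\n", knobs[i], val);
		if (f)
			fclose(f);
	}
	return 0;
}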

Why is SLUB behaving differently, though?

Memory efficiency wouldn't be the reason, would it? I mean, SLUB
should be more memory-efficient than SLAB, plus have less data lying
around in queues (SLAB's per-CPU array caches can hold quite a few
free objects).
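
If someone wants to put a rough number on that, summing num_objs * objsize over /proc/slabinfo under the same workload would show how much memory each allocator is sitting on. A quick sketch, assuming the slabinfo 2.x field layout (check the version line at the top of the file):

#include <stdio.h>
#include <string.h>

/* Rough total of memory held in slab objects: sum
 * num_objs * objsize over every cache in /proc/slabinfo.
 * This counts objects that are free but still cached by the
 * allocator, which is exactly what the queues question is about.
 * Ignores per-slab management overhead, so treat it as a lower
 * bound. */
int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512], name[64];
	unsigned long active, num, objsize, total = 0;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		/* skip the version header and column legend */
		if (line[0] == '#' || !strncmp(line, "slabinfo", 8))
			continue;
		if (sscanf(line, "%63s %lu %lu %lu",
			   name, &active, &num, &objsize) == 4)
			total += num * objsize;
	}
	fclose(f);
	printf("~%lu kB held in slab objects\n", total / 1024);
	return 0;
}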
