Date:	Fri, 28 Sep 2007 11:20:23 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc:	Nick Piggin <nickpiggin@...oo.com.au>,
	Christoph Hellwig <hch@....de>, Mel Gorman <mel@...net.ie>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	David Chinner <dgc@....com>, Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [15/17] SLUB: Support virtual fallback via SLAB_VFALLBACK

On Fri, 28 Sep 2007, Peter Zijlstra wrote:

> 
> On Fri, 2007-09-28 at 10:33 -0700, Christoph Lameter wrote:
> 
> > Again I have not seen any fallbacks to vmalloc in my testing. What we are
> > doing here is mainly to address your theoretical cases, which we have so far
> > never seen to be a problem, and to increase the reliability of allocations
> > of page orders larger than 3 to a usable level. So far I have not dared to
> > enable orders larger than 3 by default.
> 
> take a recent -mm kernel, boot with mem=128M.

Ok so only 32k pages to play with? I have tried parallel kernel compiles 
with mem=256m and they seemed to be fine.
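(128 MiB / 4 KiB per page = 32,768 pages, assuming the usual 4 KiB page size;
hence the 32k figure.)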

> start 2 processes that each mmap a separate 64M file and do sequential
> writes on them. start a 3rd process that does the same with 64M of
> anonymous memory.
> 
> wait for a while, and you'll see order=1 failures.
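
(For concreteness, a rough user-space sketch of the workload described
above; it assumes the 64M files already exist, and the file argument, the
ANON compile switch and the page-sized write stride are illustrative
choices rather than details from the original report. Run two instances
against separate files plus one built with -DANON.)

/* repro.c -- gcc -O2 repro.c -o repro   (add -DANON for the anonymous case) */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define LEN (64UL << 20)			/* 64M mapping */

int main(int argc, char **argv)
{
	size_t off;
	char *p;

#ifdef ANON
	p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#else
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <64M file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
#endif
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	for (;;)				/* keep dirtying pages sequentially */
		for (off = 0; off < LEN; off += 4096)
			p[off]++;
}

While the writers run, the per-order free block counts in /proc/buddyinfo
show when order-1 and larger blocks start to dry up.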

Really? That means we can no longer even allocate stacks for forking, since
a kernel stack is an order-1 (two-page) allocation on most configurations.

It's surprising that neither lumpy reclaim nor the mobility patches can
deal with it. Lumpy reclaim should be able to free neighboring pages to
avoid the order-1 failure unless there are lots of pinned pages.

I guess then that lots of pages are pinned through I/O?