Message-ID: <20091115120721.GA7557@bizet.domek.prywatny>
Date: Sun, 15 Nov 2009 13:07:21 +0100
From: Karol Lewandowski <karol.k.lewandowski@...il.com>
To: Mel Gorman <mel@....ul.ie>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Frans Pop <elendil@...net.nl>, Jiri Kosina <jkosina@...e.cz>,
Sven Geggus <lists@...hsschwanzdomain.de>,
Karol Lewandowski <karol.k.lewandowski@...il.com>,
Tobias Oetiker <tobi@...iker.ch>, linux-kernel@...r.kernel.org,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Rik van Riel <riel@...hat.com>,
Christoph Lameter <cl@...ux-foundation.org>,
Stephan von Krawczynski <skraw@...net.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
Kernel Testers List <kernel-testers@...r.kernel.org>
Subject: Re: [PATCH 0/5] Reduce GFP_ATOMIC allocation failures, candidate fix V3

On Thu, Nov 12, 2009 at 07:30:30PM +0000, Mel Gorman wrote:
> [Bug #14265] ifconfig: page allocation failure. order:5, mode:0x8020 w/ e100
> Patches 1-3 should be tested first. The testing I've done shows that the
> page allocator and behaviour of congestion_wait() is more in line with
> 2.6.30 than the vanilla kernels.
>
> It'd be nice to have 2 more tests, applying each patch on top noting any
> behaviour change. i.e. ideally there would be results for
>
> o patches 1+2+3
> o patches 1+2+3+4
> o patches 1+2+3+4+5
>
> Of course, any test results are welcome. The rest of the mail is the
> results of my own tests.
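
An aside before my results: decoding the "order:5, mode:0x8020" from the
bug title, assuming the gfp bit values from 2.6.3x include/linux/gfp.h
are still what I remember. The little decoder below is my own userspace
sketch, not anything from the series:

/* gfp-decode.c -- userspace helper, not kernel code.  The bit values
 * are copied from 2.6.3x include/linux/gfp.h; if they have changed,
 * the output is wrong. */
#include <stdio.h>
#include <stdlib.h>

static const struct { unsigned long bit; const char *name; } gfp_bits[] = {
	{ 0x01, "__GFP_DMA" },       { 0x02, "__GFP_HIGHMEM" },
	{ 0x04, "__GFP_DMA32" },     { 0x08, "__GFP_MOVABLE" },
	{ 0x10, "__GFP_WAIT" },      { 0x20, "__GFP_HIGH" },
	{ 0x40, "__GFP_IO" },        { 0x80, "__GFP_FS" },
	{ 0x100, "__GFP_COLD" },     { 0x200, "__GFP_NOWARN" },
	{ 0x400, "__GFP_REPEAT" },   { 0x800, "__GFP_NOFAIL" },
	{ 0x1000, "__GFP_NORETRY" }, { 0x4000, "__GFP_COMP" },
	{ 0x8000, "__GFP_ZERO" },
};

int main(int argc, char **argv)
{
	/* Default to the mode from the bug title if no argument given. */
	unsigned long mode = strtoul(argc > 1 ? argv[1] : "0x8020", NULL, 0);
	size_t i;

	printf("mode 0x%lx =", mode);
	for (i = 0; i < sizeof(gfp_bits) / sizeof(gfp_bits[0]); i++)
		if (mode & gfp_bits[i].bit)
			printf(" %s", gfp_bits[i].name);
	printf("\n");
	return 0;
}

For 0x8020 that prints __GFP_HIGH (i.e. GFP_ATOMIC) and __GFP_ZERO, and
order:5 means 2^5 = 32 contiguous pages -- 128 KiB with 4 KiB pages,
which is a lot to ask for atomically.
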
I've tried testing patches 3+4+5 against 2.6.32-rc7 (1+2 seem to be in
mainline already) and still got a failure. I've noticed something
strange (I think): I was unable to trigger failures while the system
was under heavy memory pressure (i.e. my usual testing load -- gitk,
several firefoxes, etc.). But when I killed almost all of the memory
hogs, put the system to sleep and resumed, it failed. free(1) showed:
             total       used       free     shared    buffers     cached
Mem:        255240     194052      61188          0       4040      49364
-/+ buffers/cache:     140648     114592
Swap:       514040      72712     441328
Is that ok? Note that ~60 MB still shows as free, but an order-5
request needs 32 physically contiguous pages, so free memory alone
doesn't rule out fragmentation. Wild guess -- maybe kswapd doesn't take
fragmentation (or other factors) into account as aggressively as it did
in 2.6.30?
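
To check that guess one could watch /proc/buddyinfo across
suspend/resume; it lists, per zone, the count of free blocks at each
order, and an order-5 allocation needs at least one block in the
order >= 5 columns. A small sketch of mine (again, not part of the
series) that sums those columns:

/* buddy-check.c -- userspace sketch: sum the free blocks of
 * order >= 5 in each zone from /proc/buddyinfo. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/buddyinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/buddyinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char node[32], zone[32];
		unsigned long n[11] = { 0 };
		unsigned long order5_plus = 0;
		int nf, i;

		/* Lines look like: "Node 0, zone   Normal  145 52 30 ..."
		 * with one column per order, from 0 up to MAX_ORDER-1. */
		nf = sscanf(line,
			    "Node %31[^,], zone %31s %lu %lu %lu %lu %lu %lu %lu %lu %lu %lu %lu",
			    node, zone, &n[0], &n[1], &n[2], &n[3], &n[4],
			    &n[5], &n[6], &n[7], &n[8], &n[9], &n[10]);
		if (nf < 8)	/* need node, zone and orders 0..5 */
			continue;
		for (i = 5; i < nf - 2; i++)
			order5_plus += n[i];
		printf("node %s, zone %-8s: free blocks of order >= 5: %lu\n",
		       node, zone, order5_plus);
	}
	fclose(f);
	return 0;
}

If the order >= 5 counts drop to zero right after resume while free(1)
still shows ~60 MB free, fragmentation would explain the failure.
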
Thanks.