Message-Id: <1242057793.8109.34342.camel@localhost.localdomain>
Date: Mon, 11 May 2009 09:03:13 -0700
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: David Rientjes <rientjes@...gle.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...e.de>,
Nick Piggin <npiggin@...e.de>, Mel Gorman <mel@....ul.ie>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Christoph Lameter <cl@...ux-foundation.org>,
San Mehat <san@...roid.com>, Arve Hjonnevag <arve@...roid.com>,
linux-kernel@...r.kernel.org
Subject: Re: [patch 08/11 -mmotm] oom: invoke oom killer for __GFP_NOFAIL
On Mon, 2009-05-11 at 01:45 -0700, David Rientjes wrote:
> On Mon, 11 May 2009, KOSAKI Motohiro wrote:
> include/linux/gfp.h states this:
>
> * __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
> * cannot handle allocation failures.
>
> That is the only description given to users of __GFP_NOFAIL, so they should
> be able to trust it. The fact is that in mmotm it's possible for such an
> allocation to fail without even attempting to free some memory via the oom
> killer (and I disagree that killing a large memory hogging task will not
> allow large allocations such as those greater than PAGE_ALLOC_COSTLY_ORDER
> to succeed, which is a question of fragmentation and not purely VM size).
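(For concreteness, the kind of caller that comment describes is roughly the
sketch below; the function name, order, and flags here are made up for
illustration and are not taken from the patch set.)

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustrative only: a caller with no recovery path for allocation
 * failure, asking for an order-4 block.  That is above
 * PAGE_ALLOC_COSTLY_ORDER (3), so whether the allocator may give up
 * without invoking the OOM killer is exactly the question raised here.
 */
static struct page *grab_contiguous_buffer(void)
{
	return alloc_pages(GFP_KERNEL | __GFP_NOFAIL, 4);
}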
I assume that you've actually seen this behavior where OOM-killing a
task will free enough memory to allow a higher-order allocation to
succeed.
Could you explain a little more about why you think this scenario works
for you? Are large contiguous areas of memory pinned by the task
which you want killed? Why wasn't swapping effective
against this task? Was the task itself taking up a large portion of
total memory?
-- Dave