Message-ID: <alpine.DEB.2.00.0905110143190.24726@chino.kir.corp.google.com>
Date: Mon, 11 May 2009 01:45:56 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...e.de>,
Nick Piggin <npiggin@...e.de>, Mel Gorman <mel@....ul.ie>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Christoph Lameter <cl@...ux-foundation.org>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
San Mehat <san@...roid.com>, Arve Hjonnevag <arve@...roid.com>,
linux-kernel@...r.kernel.org
Subject: Re: [patch 08/11 -mmotm] oom: invoke oom killer for __GFP_NOFAIL
On Mon, 11 May 2009, KOSAKI Motohiro wrote:
> > What exactly are you objecting to? You don't want the oom killer called
> > for a __GFP_NOFAIL allocation above PAGE_ALLOC_COSTLY_ORDER that could not
> > reclaim any memory, and would prefer that it loop endlessly in the page
> > allocator? What's the purpose of that if the oom killer could free a very
> > large memory-hogging task?
>
> My point is that if we change the meaning of the gfp flags, we should
> also fix the callers that are unintentionally affected.
>
> Do you oppose this?
>
include/linux/gfp.h states this:
* __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
* cannot handle allocation failures.
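For illustration, a caller relying on that documented contract looks
something like the following hypothetical sketch (struct my_record and
alloc_record() are made-up names); note that there is deliberately no
error path at all:

	#include <linux/slab.h>
	#include <linux/gfp.h>

	struct my_record {
		int id;
	};

	static struct my_record *alloc_record(void)
	{
		/*
		 * The caller cannot handle failure, so it asks the VM to
		 * retry forever; per the gfp.h comment above, the result
		 * is never checked for NULL.
		 */
		return kmalloc(sizeof(struct my_record),
			       GFP_NOFS | __GFP_NOFAIL);
	}
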
That comment is the only description given to users of __GFP_NOFAIL, so
they should be able to trust it. The fact is that in mmotm it is possible
for such an allocation to fail without even attempting to free some memory
via the oom killer (and I disagree that killing a large memory-hogging
task will not allow large allocations, such as those greater than
PAGE_ALLOC_COSTLY_ORDER, to succeed; that is a question of fragmentation,
not purely of VM size).
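To make the failure mode concrete, here is a simplified, hypothetical
sketch of the slowpath decision this patch argues for. It is not the
actual mmotm code, and try_reclaim_and_alloc() and invoke_oom_killer()
are made-up helper names:

	static struct page *slowpath_sketch(gfp_t gfp_mask, unsigned int order)
	{
		struct page *page;

		for (;;) {
			/* direct reclaim followed by another allocation attempt */
			page = try_reclaim_and_alloc(gfp_mask, order);
			if (page)
				return page;

			/*
			 * Costly orders normally give up here rather than
			 * retry.  The __GFP_NOFAIL exemption is the point
			 * in dispute: without it, a costly-order nofail
			 * allocation can fail or spin without ever trying
			 * the oom killer.
			 */
			if (order > PAGE_ALLOC_COSTLY_ORDER &&
			    !(gfp_mask & __GFP_NOFAIL))
				return NULL;

			/*
			 * A __GFP_NOFAIL allocation that reclaimed nothing
			 * reaches the oom killer, which may free a large
			 * memory-hogging task instead of looping forever
			 * without progress.
			 */
			invoke_oom_killer(gfp_mask, order);
		}
	}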