Message-ID: <alpine.DEB.2.00.0905111557360.5979@chino.kir.corp.google.com>
Date: Mon, 11 May 2009 16:00:58 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: gregkh@...e.de, npiggin@...e.de, mel@....ul.ie,
a.p.zijlstra@...llo.nl, cl@...ux-foundation.org,
dave@...ux.vnet.ibm.com, san@...roid.com, arve@...roid.com,
linux-kernel@...r.kernel.org
Subject: Re: [patch 08/11 -mmotm] oom: invoke oom killer for __GFP_NOFAIL
On Mon, 11 May 2009, Andrew Morton wrote:
> oh, well that was pretty useless then. I was trying to find a handy
> spot where we can avoid adding fastpath cycles.
>
> How about we sneak it into the order>0 leg inside buffered_rmqueue()?
>
Wouldn't it be easier, once my patch is merged, to just check the oom
killer stack traces for such allocations when people complain about
unnecessary oom killing while memory is available but too fragmented?  The
gfp_flags and order are shown in the oom killer header.
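
(For reference, the header I mean is the first line of the oom kill
report; roughly the following, in mm/oom_kill.c as of current trees --
the exact field names here are from memory, so treat it as a sketch:)

	/* first line of the oom kill report: shows the failing gfp_mask and order */
	printk(KERN_WARNING "%s invoked oom-killer: "
		"gfp_mask=0x%x, order=%d, oomkilladj=%d\n",
		current->comm, gfp_mask, order, current->oomkilladj);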
>
> --- a/mm/page_alloc.c~page-allocator-warn-if-__gfp_nofail-is-used-for-a-large-allocation
> +++ a/mm/page_alloc.c
> @@ -1130,6 +1130,20 @@ again:
> list_del(&page->lru);
> pcp->count--;
> } else {
> + if (unlikely(gfp_mask & __GFP_NOFAIL)) {
> + /*
> + * __GFP_NOFAIL is not to be used in new code.
> + *
> + * All __GFP_NOFAIL callers should be fixed so that they
> + * properly detect and handle allocation failures.
> + *
> + * We most definitely don't want callers attempting to
> + * allocate greater than single-page units with
> + * __GFP_NOFAIL.
> + */
> + WARN_ON_ONCE(order > 0);
> + return 0;
> + }
> spin_lock_irqsave(&zone->lock, flags);
> page = __rmqueue(zone, order, migratetype);
> __mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << order));
That "return 0" definitely needs to be removed, though :)