Message-Id: <20090624120617.1e6799b5.akpm@linux-foundation.org>
Date: Wed, 24 Jun 2009 12:06:17 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: penberg@...helsinki.fi, arjan@...radead.org,
linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
npiggin@...e.de
Subject: Re: upcoming kerneloops.org item: get_page_from_freelist
On Wed, 24 Jun 2009 11:42:33 -0700 (PDT)
Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> So I'd suggest just doing this..
>
> Linus
> ---
> mm/page_alloc.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index aecc9cd..5d714f8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1153,10 +1153,10 @@ again:
> * properly detect and handle allocation failures.
> *
> * We most definitely don't want callers attempting to
> - * allocate greater than single-page units with
> + * allocate greater than order-1 page units with
> * __GFP_NOFAIL.
> */
> - WARN_ON_ONCE(order > 0);
> + WARN_ON_ONCE(order > 1);
> }
> spin_lock_irqsave(&zone->lock, flags);
> page = __rmqueue(zone, order, migratetype);
Well. What is our overall objective here?
My original patch was motivated by the horror at discovering that
we're using this thing (which was _never_ supposed to have new users)
for order>0 allocations. We've gone backwards.
Ideally, we'd fix all callers to handle allocation failures then remove
__GFP_NOFAIL. But I don't know how to fix JBD.
So perhaps we should just revert that WARN_ON altogether, and I can go
on a little grep-empowered rampage, see if we can remove some of these
callsites.
It's not a huge problem, btw. I don't think I've ever seen a report of
a machine getting stuck in a __GFP_NOFAIL allocation attempt. But from
a design perspective it's Just Wrong.