Message-ID: <alpine.DEB.2.00.1002151355000.26927@chino.kir.corp.google.com>
Date: Mon, 15 Feb 2010 14:01:17 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Nick Piggin <npiggin@...e.de>,
Andrea Arcangeli <aarcange@...hat.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Lubos Lunak <l.lunak@...e.cz>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch 6/7 -mm] oom: avoid oom killer for lowmem allocations
On Mon, 15 Feb 2010, KAMEZAWA Hiroyuki wrote:
> > I can't agree with that assessment; I don't think it's ever a desired
> > result to panic the machine, regardless of what /proc/sys/vm/panic_on_oom
> > is set to, just because a lowmem page allocation fails.  That's
> > especially true considering, as mentioned in the changelog, that these
> > allocations are never __GFP_NOFAIL and returning NULL is acceptable.
> >
> please add
> WARN_ON((high_zoneidx < ZONE_NORMAL) && (gfp_mask & __GFP_NOFAIL))
> somewhere.  With that, your patch seems to make sense.
>
high_zoneidx < ZONE_NORMAL is not the only case where this exists: it
also exists for __GFP_NOFAIL allocations that are not __GFP_FS, and has
for years, so no special handling is needed now.
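
For reference, the slowpath has had roughly this shape for a long time
(a simplified sketch of the 2.6.33-era logic, not the exact code):

	rebalance:
		page = get_page_from_freelist(...);
		if (page)
			goto got_pg;

		/* the oom killer is only considered for __GFP_FS allocs */
		if (!did_some_progress && (gfp_mask & __GFP_FS) &&
		    !(gfp_mask & __GFP_NORETRY)) {
			page = __alloc_pages_may_oom(...);
			if (page)
				goto got_pg;
		}

		/* __GFP_NOFAIL keeps looping whether or not oom was tried */
		if (should_alloc_retry(gfp_mask, order, pages_reclaimed))
			goto rebalance;

so a GFP_NOFS | __GFP_NOFAIL allocation already loops without ever
invoking the oom killer, and that has never warranted a warning.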
My audit of the kernel code found no users of either GFP_DMA |
__GFP_NOFAIL or GFP_NOFS | __GFP_NOFAIL.  And since no new __GFP_NOFAIL
users are to be added (see Andrew's dab48dab), there's no real reason to
add a WARN_ON() here.
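
If such a warning were wanted despite that, it would just be a one-off
check near the top of the slowpath; something like this (hypothetical
placement, untested):

	/* hypothetical: flag lowmem allocations that could loop forever */
	WARN_ON_ONCE((high_zoneidx < ZONE_NORMAL) &&
		     (gfp_mask & __GFP_NOFAIL));

but given the audit above, it would never fire on current code.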
> I don't like the "possibility" of infinite loops.
>
The possibility of infinite loops has always existed in the page
allocator for __GFP_NOFAIL allocations; that's precisely why the flag is
deprecated and why we eventually seek to remove it.
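
Concretely, the unbounded retry comes from should_alloc_retry(); a
rough sketch of its behavior (simplified, not verbatim):

	static inline int
	should_alloc_retry(gfp_t gfp_mask, unsigned int order,
			   unsigned long pages_reclaimed)
	{
		/* the caller explicitly opted out of looping */
		if (gfp_mask & __GFP_NORETRY)
			return 0;

		/* ... order and pages_reclaimed heuristics elided ... */

		/* __GFP_NOFAIL callers cannot handle failure: loop forever */
		if (gfp_mask & __GFP_NOFAIL)
			return 1;

		return 0;
	}

The eventual goal is that the remaining __GFP_NOFAIL callers handle a
NULL return and retry on their own, at which point this loop can go.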