Message-ID: <alpine.LFD.2.01.0906251339190.3605@localhost.localdomain>
Date: Thu, 25 Jun 2009 13:51:17 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: David Rientjes <rientjes@...gle.com>
cc: Theodore Tso <tytso@....edu>,
Andrew Morton <akpm@...ux-foundation.org>,
penberg@...helsinki.fi, arjan@...radead.org,
linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
npiggin@...e.de
Subject: Re: upcoming kerneloops.org item: get_page_from_freelist
On Thu, 25 Jun 2009, David Rientjes wrote:
>
> On Thu, 25 Jun 2009, Linus Torvalds wrote:
>
> > It might make more sense to make a __GFP_WAIT allocation set the
> > ALLOC_HARDER bit _if_ it repeats.
>
> This would make sense, but only for !__GFP_FS, since otherwise the oom
> killer will free some memory on an allowed node when reclaim fails and we
> don't otherwise want to deplete memory reserves.
So the reason I tend to like the kind of "incrementally try harder"
approaches is two-fold:
 - it works well for balancing different choices against each other (like
   on the freeing path, trying to see which kind of memory is most easily
   freed by trying them all first in a "don't try very hard" mode)

 - it's great for forcing _everybody_ to do part of the work (ie when some
   new thread comes in and tries to allocate, the new thread starts off
   with the lower priority, and as such won't steal a page that an older
   allocator just freed)
And I think that second case is true even for the oom killer case, and
even for __GFP_FS.
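
The same idea can be sketched in userspace C. This is a toy model, not
the real allocator; every name, flag value and function below is a
made-up stand-in for illustration:

```c
/*
 * Toy sketch of "incrementally try harder": every caller starts at the
 * weakest priority and only escalates after its own failed attempts,
 * so a newly arriving allocator cannot immediately steal a page that
 * an older one just freed. All names here are illustrative stand-ins.
 */
#include <stdbool.h>

#define ALLOC_LOW    0	/* don't try very hard */
#define ALLOC_MID    1	/* normal effort */
#define ALLOC_HARDER 2	/* dip further into reserves */

/* Stand-in for get_page_from_freelist(): succeeds only when the
 * caller's effort level meets the current memory pressure. */
static bool try_get_page(int effort, int pressure)
{
	return effort >= pressure;
}

/* Returns the effort level that finally succeeded, or -1 on failure. */
static int alloc_with_escalation(int pressure)
{
	int effort;

	for (effort = ALLOC_LOW; effort <= ALLOC_HARDER; effort++)
		if (try_get_page(effort, pressure))
			return effort;
	return -1;
}
```

The point of the escalation loop is exactly the fairness property
above: an old allocator that has already failed at ALLOC_LOW is
retrying at a higher priority than any newcomer.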
So if people worry about oom, I would suggest that we should not think so
hard about the GFP_NOFAIL cases (which are relatively small and rare), or
about things like the above "try harder when repeating" model, but instead
think about what actually happens during oom: the most common allocations
will remain the page allocations for user faults and/or page cache. In
fact, they get *more* common as you near an OOM situation, because you get
into the whole swap/filemap thrashing situation where you have to re-read
the same pages over and over again.
So don't worry about NOFS. Instead, look at what GFP_USER and GFP_HIGHUSER
do. They set the __GFP_HARDWALL bit, and they _always_ check the end
result and fail gracefully and quickly when the allocation fails.
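
As a rough illustration of that calling convention (the flag values and
function names below are made up for the sketch, not the kernel's real
definitions):

```c
/*
 * Userspace sketch of the GFP_USER calling convention: the caller
 * always checks the result and fails gracefully and quickly instead
 * of retrying forever. Flag values are illustrative, not the real
 * kernel constants.
 */
#include <stdlib.h>

#define __GFP_WAIT	0x01
#define __GFP_HARDWALL	0x08
#define GFP_USER	(__GFP_WAIT | __GFP_HARDWALL)

struct page { char data[4096]; };

/* Stand-in allocator: may return NULL like the page allocator. */
static struct page *alloc_page_sketch(unsigned int gfp_flags)
{
	return calloc(1, sizeof(struct page));
}

/* A GFP_USER caller bails out quickly on failure (the way a fault
 * handler would return VM_FAULT_OOM) rather than looping hard. */
static int handle_fault_sketch(void)
{
	struct page *p = alloc_page_sketch(GFP_USER);

	if (!p)
		return -1;	/* graceful, quick failure */
	free(p);
	return 0;
}
```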
End result? Realistically, I suspect the _best_ thing we can do is to just
couple that bit with "we're out of memory", and just do something like
	if (!did_some_progress && (gfp_flags & __GFP_HARDWALL))
		goto nopage;
rather than anything else. And I suspect that if we do this, we can then
afford to retry very aggressively for the allocation cases that aren't
GFP_USER - and that may well be needed in order to make progress.
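
That check could slot into a retry loop along these lines. Again a toy
userspace sketch, with illustrative stand-ins for the flags, the
reclaim model and the success condition:

```c
/*
 * Sketch of the proposed retry policy: the slowpath bails out
 * immediately for __GFP_HARDWALL callers when reclaim made no
 * progress, but keeps retrying aggressively for everyone else.
 * All names and values are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stddef.h>

#define __GFP_HARDWALL	0x08

static int attempts_made;

/*
 * Toy model: the allocation succeeds once `succeed_after` attempts
 * have been made; reclaim makes progress for the first
 * `progress_budget` rounds.
 */
static void *slowpath_sketch(unsigned int gfp_flags,
			     int succeed_after, int progress_budget)
{
	attempts_made = 0;
	for (;;) {
		attempts_made++;
		if (attempts_made >= succeed_after)
			return &attempts_made;	/* got a page */

		bool did_some_progress = progress_budget-- > 0;

		if (!did_some_progress && (gfp_flags & __GFP_HARDWALL))
			return NULL;		/* "goto nopage" */
		/* non-__GFP_HARDWALL callers keep retrying hard */
	}
}
```

In this model a __GFP_HARDWALL caller gives up on the very first
no-progress round, while everyone else loops as long as reclaim keeps
making progress.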
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/