Message-ID: <20150401151920.GB23824@dhcp22.suse.cz>
Date: Wed, 1 Apr 2015 17:19:20 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Dave Chinner <david@...morbit.com>
Cc: Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Huang Ying <ying.huang@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Theodore Ts'o <tytso@....edu>
Subject: Re: [patch 00/12] mm: page_alloc: improve OOM mechanism and policy
On Mon 30-03-15 11:32:40, Dave Chinner wrote:
> On Fri, Mar 27, 2015 at 11:05:09AM -0400, Johannes Weiner wrote:
[...]
> > GFP_NOFS sites are currently among the sites that can deadlock
> > inside the allocator, even though many of them seem to have fallback
> > code.
> > My reasoning here is that if you *have* an exit strategy for failing
> > allocations that is smarter than hanging, we should probably use that.
>
> We already do that for allocations where we can handle failure in
> GFP_NOFS conditions. It is, however, somewhat useless if we can't
> tell the allocator to try really hard if we've already had a failure
> and we are already in memory reclaim conditions (e.g. a shrinker
> trying to clean dirty objects so they can be reclaimed).
>
> From that perspective, I think that this patch set aims to force us
> away from handling fallbacks ourselves because a) it makes GFP_NOFS
> more likely to fail, and b) provides no mechanism to "try harder"
> when we really need the allocation to succeed.
You can ask for this "try harder" with the __GFP_HIGH flag. Would that
help in your fallback case?
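
For illustration only (this sketch is not part of the thread): a minimal
example of the kind of fallback being discussed, assuming a caller that
can tolerate a first-pass GFP_NOFS failure and then retry harder. The
struct and function names are hypothetical; GFP_NOFS, __GFP_HIGH and
kmalloc() are the only real kernel interfaces used.

#include <linux/slab.h>

/* Hypothetical object type, for illustration only. */
struct my_object {
	int data;
};

static struct my_object *my_alloc_nofs(void)
{
	struct my_object *obj;

	/* First attempt: plain GFP_NOFS; this can fail under memory pressure. */
	obj = kmalloc(sizeof(*obj), GFP_NOFS);
	if (obj)
		return obj;

	/*
	 * Fallback: we are already in reclaim context (e.g. a shrinker
	 * cleaning dirty objects), so retry with __GFP_HIGH to let the
	 * allocator dip into its reserves rather than loop here ourselves.
	 */
	return kmalloc(sizeof(*obj), GFP_NOFS | __GFP_HIGH);
}

Whether access to the __GFP_HIGH reserves is enough for the shrinker
case Dave describes is the open question in this exchange.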
--
Michal Hocko
SUSE Labs