Message-ID: <20160613151726.GL6518@dhcp22.suse.cz>
Date: Mon, 13 Jun 2016 17:17:26 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org, mgorman@...e.de,
vbabka@...e.cz, hannes@...xchg.org, riel@...hat.com,
david@...morbit.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/2] mm, tree wide: replace __GFP_REPEAT by
__GFP_RETRY_HARD with more useful semantic
On Mon 13-06-16 23:54:13, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Sat 11-06-16 23:35:49, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Tue 07-06-16 21:11:03, Tetsuo Handa wrote:
> > > > > Remaining __GFP_REPEAT users are not always doing costly allocations.
> > > >
> > > > Yes but...
> > > >
> > > > > Sometimes they pass __GFP_REPEAT because the size is given by userspace.
> > > > > Thus, an unconditional s/__GFP_REPEAT/__GFP_RETRY_HARD/g is not good.
> > > >
> > > > Would that be a regression though? Strictly speaking, the __GFP_REPEAT
> > > > documentation was explicit that it does not loop forever, so nobody
> > > > should have expected nofail semantics, pretty much by definition. The
> > > > fact that our previous implementation did not fully conform to the
> > > > documentation is just an implementation detail. All the remaining users
> > > > of __GFP_REPEAT _have_ to be prepared for allocation failure. So what
> > > > exactly is the problem with them?
> > >
> > > A !costly allocation becomes weaker than now if __GFP_RETRY_HARD is passed.
> >
> > That is true. But it is not weaker than what __GFP_REPEAT ever actually
> > promised. __GFP_REPEAT explicitly said not to retry _for_ever_. The fact
> > that we have ignored that is sad, but it is what I am trying to address
> > here.
>
> Whatever you rename __GFP_REPEAT to, it sounds strange to me that !costly
> __GFP_REPEAT allocations become weaker than !costly !__GFP_REPEAT allocations.
> Are you planning to make !costly !__GFP_REPEAT allocations behave like
> __GFP_NORETRY?
The patch description tries to explain the difference:

  __GFP_NORETRY    does not retry at all
  __GFP_RETRY_HARD retries as hard as feasible
  __GFP_NOFAIL     retries for ever

all of them regardless of the order. This is how to tell the allocator to
change its default behavior, which might be, and actually is, different
depending on the order.
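
To illustrate the caller side, here is just a sketch of the intended usage,
not code from the patch; my_alloc_buffer() is a made-up example and
__GFP_RETRY_HARD is the flag proposed by this series:

	#include <linux/slab.h>
	#include <linux/printk.h>

	static void *my_alloc_buffer(size_t size)
	{
		void *buf;

		/* opportunistic attempt which backs off early */
		buf = kmalloc(size, GFP_KERNEL | __GFP_NORETRY);
		if (buf)
			return buf;

		/*
		 * Try much harder, but the allocation can still fail and the
		 * caller has to cope with NULL. Only __GFP_NOFAIL would
		 * guarantee success by looping for ever.
		 */
		buf = kmalloc(size, GFP_KERNEL | __GFP_RETRY_HARD);
		if (!buf)
			pr_warn("buffer allocation failed\n");

		return buf;
	}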
[...]
> > > That _somebody_ might release oom_lock without invoking the OOM killer
> > > (e.g. when doing a !__GFP_FS allocation), which means that we have reached
> > > the OOM condition and nobody is actually handling the OOM on our behalf.
> > > __GFP_RETRY_HARD then becomes as weak as __GFP_NORETRY. I think this is a
> > > regression.
> >
> > I really fail to see your point. We are talking about a gfp flag which
> > tells the allocator to retry as much as is feasible. Getting through all
> > the reclaim attempts twice without any progress sounds like a fair
> > criterion. Well, we could try $NUM times, but that wouldn't make much
> > difference to what you are writing above. Whether or not somebody has
> > been killed is not really that important IMHO.
>
> If all the reclaim attempts made no progress the first time, all the reclaim
> attempts the second time are unlikely to make progress unless the OOM killer
> kills something. Thus, doing all the reclaim attempts twice without any
> progress and without killing somebody sounds almost equivalent to doing all
> the reclaim attempts only once.
Yes, that is possible. You might have a GFP_NOFS-only load where nothing
really invokes the OOM killer. Does that actually matter, though? The
semantic of the flag is to retry hard while the page allocator believes it
can make forward progress, but not forever. We never know whether progress
is possible at all. We have certain heuristics for when to give up, try to
invoke the OOM killer and try again, hoping things have changed. This is
not much different, except that we declare there is no hope once we get to
the OOM point again without having been able to succeed. Are you suggesting
a more precise heuristic? Or do you claim that we do not need a flag which
provides a middle ground between __GFP_NORETRY and __GFP_NOFAIL, which are
the two extremes?
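
In (simplified, hypothetical) code, the policy I have in mind looks roughly
like the following; the real slow path is of course more involved, and
should_retry()/MAX_RETRIES are made-up names for this sketch only:

	#include <linux/gfp.h>
	#include <linux/types.h>

	#define MAX_RETRIES	16	/* arbitrary limit for this sketch */

	static bool should_retry(gfp_t gfp_mask, int no_progress_loops)
	{
		/* give up immediately, the caller has a cheap fallback */
		if (gfp_mask & __GFP_NORETRY)
			return false;

		/* the other extreme: loop for ever until success */
		if (gfp_mask & __GFP_NOFAIL)
			return true;

		/*
		 * The middle ground (__GFP_RETRY_HARD): keep retrying while
		 * reclaim seems able to make forward progress, and give up
		 * after repeatedly getting through all reclaim attempts
		 * without any progress, whether or not the OOM killer has
		 * actually been invoked in the meantime.
		 */
		return no_progress_loops < MAX_RETRIES;
	}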
--
Michal Hocko
SUSE Labs