Message-ID: <20171031121032.lm3wxx3l5tkpo2ni@dhcp22.suse.cz>
Date:   Tue, 31 Oct 2017 13:10:32 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:     aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, rientjes@...gle.com,
        hannes@...xchg.org, mjaggi@...iumnetworks.com, mgorman@...e.de,
        oleg@...hat.com, vdavydov.dev@...il.com, vbabka@...e.cz
Subject: Re: [PATCH] mm,oom: Try last second allocation before and after
 selecting an OOM victim.

On Tue 31-10-17 19:40:09, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > > +struct page *alloc_pages_before_oomkill(struct oom_control *oc)
> > > +{
> > > +	/*
> > > +	 * Make sure that this allocation attempt does not depend on
> > > +	 * __GFP_DIRECT_RECLAIM && !__GFP_NORETRY behavior, for the caller
> > > +	 * is already holding oom_lock.
> > > +	 */
> > > +	const gfp_t gfp_mask = oc->gfp_mask & ~__GFP_DIRECT_RECLAIM;
> > > +	struct alloc_context *ac = oc->ac;
> > > +	unsigned int alloc_flags = gfp_to_alloc_flags(gfp_mask);
> > > +	const int reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
> > > +
> > > +	/* Need to update zonelist if selected as OOM victim. */
> > > +	if (reserve_flags) {
> > > +		alloc_flags = reserve_flags;
> > > +		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
> > > +		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> > > +					ac->high_zoneidx, ac->nodemask);
> > > +	}
> > 
> > Why do we need this zone list rebuilding?
> 
> Why do we _not_ need this zone list rebuilding?
> 
> The reason I used __alloc_pages_slowpath() in alloc_pages_before_oomkill() is
> to avoid duplicating code (such as checking for ALLOC_OOM and rebuilding the
> zonelist) that needs to be maintained in sync with __alloc_pages_slowpath().
>
> If you don't like calling __alloc_pages_slowpath() from
> alloc_pages_before_oomkill(), I'm OK with calling __alloc_pages_nodemask()
> (with __GFP_DIRECT_RECLAIM/__GFP_NOFAIL cleared and __GFP_NOWARN set), since
> direct reclaim functions can call __alloc_pages_nodemask() (with PF_MEMALLOC
> set in order to avoid recursion of direct reclaim).
> 
> We are rebuilding the zonelist if the caller was selected as an OOM victim,
> because __gfp_pfmemalloc_flags() returns ALLOC_OOM when
> oom_reserves_allowed(current) is true.
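> 
> For reference, __gfp_pfmemalloc_flags() currently looks roughly like this
> (a trimmed sketch of mm/page_alloc.c; details may differ by kernel version):
> 
> 	static inline int __gfp_pfmemalloc_flags(gfp_t gfp_mask)
> 	{
> 		if (unlikely(gfp_mask & __GFP_MEMALLOC))
> 			return ALLOC_NO_WATERMARKS;
> 		if (in_serving_softirq() && (current->flags & PF_MEMALLOC))
> 			return ALLOC_NO_WATERMARKS;
> 		if (!in_interrupt()) {
> 			if (current->flags & PF_MEMALLOC)
> 				return ALLOC_NO_WATERMARKS;
> 			else if (oom_reserves_allowed(current))
> 				/* an OOM victim may use the OOM reserves */
> 				return ALLOC_OOM;
> 		}
> 		return 0;
> 	}
> 
> That is why an OOM victim needs the same reserve_flags handling (including
> the zonelist reset) that __alloc_pages_slowpath() already does.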

So your answer is copy&paste without a deeper understanding, right?

[...]

> The reason I'm proposing this "mm,oom: Try last second allocation before and
> after selecting an OOM victim." is that oom_reserves_allowed(current) can
> become true after __gfp_pfmemalloc_flags(gfp_mask) was evaluated but before
> mutex_trylock(&oom_lock) is taken, so an OOM victim can fail to try an
> ALLOC_OOM allocation before the next OOM victim is selected, if
> MMF_OOM_SKIP was set quickly.

ENOPARSE. I am not even going to finish this email, sorry. This is way
beyond my time budget.

Can you actually come up with something that doesn't make one's head explode
and yet describes what the actual problem is and how you deal with it?

E.g. something like this:
"
The OOM killer is invoked after all the reclaim attempts have failed and
there doesn't seem to be a viable chance for the situation to change.
__alloc_pages_may_oom tries to reduce the chances of a race during OOM
handling by taking the oom_lock, so only one caller is allowed to really
invoke the oom killer.

__alloc_pages_may_oom also tries one last ALLOC_WMARK_HIGH allocation
request before really invoking the out_of_memory handler. This has two
motivations. The first one is explained by the comment and aims to
catch potential parallel OOM killing; the second one was explained by
Andrea Arcangeli as follows:
: Elaborating the comment: the reason for the high wmark is to reduce
: the likelihood of livelocks and be sure to invoke the OOM killer, if
: we're still under pressure and reclaim just failed. The high wmark is
: used to be sure the failure of reclaim isn't going to be ignored. If
: using the min wmark like you propose there's risk of livelock or
: anyway of delayed OOM killer invocation.

While both have some merit, the first reason is mostly historical
because we have the explicit locking now and it is really unlikely that
the memory would be available right after we have given up trying. The
last-attempt allocation makes some sense of course, but considering that
OOM victim selection is quite an expensive operation which can take a
considerable amount of time, it makes much more sense to retry the
allocation after the most expensive part rather than before. Therefore
move the last attempt to right before we try to kill an OOM victim, to
rule out potential races when somebody could have freed a lot of memory
in the meantime. This will reduce the time window for potentially
premature OOM killing considerably.
"
-- 
Michal Hocko
SUSE Labs
