Date:	Wed, 30 Mar 2016 20:46:48 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	mhocko@...nel.org, rientjes@...gle.com
Cc:	linux-mm@...ck.org, hannes@...xchg.org, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] mm, oom: move GFP_NOFS check to out_of_memory

Michal Hocko wrote:
> On Tue 29-03-16 15:13:54, David Rientjes wrote:
> > On Tue, 29 Mar 2016, Michal Hocko wrote:
> > 
> > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > > index 86349586eacb..1c2b7a82f0c4 100644
> > > --- a/mm/oom_kill.c
> > > +++ b/mm/oom_kill.c
> > > @@ -876,6 +876,10 @@ bool out_of_memory(struct oom_control *oc)
> > >  		return true;
> > >  	}
> > >  
> > > +	/* The OOM killer does not compensate for IO-less reclaim. */
> > > +	if (!(oc->gfp_mask & __GFP_FS))
> > > +		return true;
> > > +

This patch will effectively disable pagefault_out_of_memory(), because
pagefault_out_of_memory() currently passes oc->gfp_mask == 0, so the new check
makes out_of_memory() return true before selecting a victim.

Because of that current behavior, the oom notifiers are already being invoked
from a !__GFP_FS (gfp_mask == 0) context, so calling them from !__GFP_FS seems
to be safe.
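
For reference, a simplified reconstruction of the page fault path from this
kernel era (from memory, not verbatim): pagefault_out_of_memory() builds its
oom_control with gfp_mask == 0, so the proposed !(oc->gfp_mask & __GFP_FS)
test is always true there and out_of_memory() would bail out before killing
anything.

/* mm/oom_kill.c, simplified reconstruction (not verbatim) */
void pagefault_out_of_memory(void)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.nodemask = NULL,
		.gfp_mask = 0,	/* !(0 & __GFP_FS) is true, so the new check fires */
		.order = 0,
	};

	if (mem_cgroup_oom_synchronize(true))
		return;

	if (!mutex_trylock(&oom_lock))
		return;

	if (!out_of_memory(&oc)) {
		/* Only a racing OOM victim should get here while the killer is disabled. */
		WARN_ON(test_thread_flag(TIF_MEMDIE));
	}

	mutex_unlock(&oom_lock);
}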

> > >  	/*
> > >  	 * Check if there were limitations on the allocation (only relevant for
> > >  	 * NUMA) that may require different handling.
> > 
> > I don't object to this necessarily, but I think we need input from those 
> > that have taken the time to implement their own oom notifier to see if 
> > they agree.  In the past, they would only be called if reclaim had 
> > completely failed; now, they can be called in low-memory situations when 
> > reclaim has had very little chance to be successful.  Getting an ack from 
> > them would be helpful.
> 
> I will make sure to put them on the CC and mention this in the changelog
> when I post this next time. I personally think that this shouldn't make
> much difference in real life because GFP_NOFS-only loads are rare

GFP_NOFS-only loads are rare, but a GFP_KERNEL load which got TIF_MEMDIE
might be waiting for GFP_NOFS or GFP_NOIO loads to make progress.

I don't think we are ready to handle the situation where out_of_memory() is
called again, after the current thread has already got TIF_MEMDIE, due to a
__GFP_NOFAIL allocation request issued once we ran out of memory reserves.
We should not assume that the intended victim thread does not have TIF_MEMDIE
yet. We could handle it by making mark_oom_victim() return a bool and taking
the shortcut only when mark_oom_victim() actually set TIF_MEMDIE, though I
don't like the shortcut approach because it lacks a guaranteed unlocking
mechanism.
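
A rough sketch of that idea (hypothetical, not a tested patch): let
mark_oom_victim() report whether this particular call set TIF_MEMDIE, and
take the early-return shortcut in out_of_memory() only on success.

/* Hypothetical sketch, not a tested patch. */
bool mark_oom_victim(struct task_struct *tsk)
{
	WARN_ON(oom_killer_disabled);
	/* Somebody else (e.g. the memcg OOM path) may have marked it already. */
	if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
		return false;
	atomic_inc(&tsk->signal->oom_victims);
	__thaw_task(tsk);
	atomic_inc(&oom_victims);
	return true;
}

/* ... and in out_of_memory(), take the shortcut only when we set the flag: */
	if (current->mm &&
	    (fatal_signal_pending(current) || task_will_free_mem(current))) {
		if (mark_oom_victim(current))
			return true;
	}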

> and we should rather help by releasing memory when it is available than
> rely on something else to do it for us. Waiting for Godot is never a
> good strategy.
> 
> > I also think we have discussed this before, but I think the oom notifier 
> > handling should be done in the page allocator proper, i.e. in 
> > __alloc_pages_may_oom().  We can leave out_of_memory() for a clearly defined 
> > purpose: to kill a process when all reclaim has failed.
> 
> I vaguely remember there was some issue with that the last time we
> discussed it. It was the duplication between the page fault and allocator
> paths, AFAIR. Nothing that cannot be handled, but the OOM notifier API is
> just too ugly to spread outside the OOM proper, I guess. Why can't we move
> those users to the proper shrinker interface (after it gets extended with a
> priority of some sort, so that they release their objects only once we are
> really in trouble)? Something for a separate discussion, though...

Is calling the oom notifiers from SysRq-f what we want?
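
For context, a simplified reconstruction (from memory, not verbatim) of why
SysRq-f is involved: the sysrq handler calls out_of_memory() directly, and the
oom notifier chain runs at the top of out_of_memory(), so moving the notifier
call into __alloc_pages_may_oom() would also change what SysRq-f does.

/* drivers/tty/sysrq.c, simplified reconstruction: SysRq-f ends up in out_of_memory() */
static void moom_callback(struct work_struct *ignored)
{
	const gfp_t gfp_mask = GFP_KERNEL;
	struct oom_control oc = {
		.zonelist = node_zonelist(first_memory_node, gfp_mask),
		.nodemask = NULL,
		.gfp_mask = gfp_mask,
		.order = -1,
	};

	mutex_lock(&oom_lock);
	if (!out_of_memory(&oc))
		pr_info("OOM request ignored because killer is disabled\n");
	mutex_unlock(&oom_lock);
}

/* mm/oom_kill.c, simplified reconstruction: notifiers run before victim selection */
bool out_of_memory(struct oom_control *oc)
{
	unsigned long freed = 0;

	if (oom_killer_disabled)
		return false;

	blocking_notifier_call_chain(&oom_notify_list, 0, &freed);
	if (freed > 0)
		/* Got some memory back in the last second. */
		return true;

	/* ... constraint checks and victim selection elided ... */
	return true;
}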
