Date:   Fri, 8 Dec 2017 23:03:23 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:     mhocko@...nel.org
Cc:     surenb@...gle.com, akpm@...ux-foundation.org, hannes@...xchg.org,
        hillf.zj@...baba-inc.com, minchan@...nel.org,
        mgorman@...hsingularity.net, ying.huang@...el.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        timmurray@...gle.com, tkjos@...gle.com
Subject: Re: [PATCH v2] mm: terminate shrink_slab loop if signal is pending

Michal Hocko wrote:
> On Fri 08-12-17 20:36:16, Tetsuo Handa wrote:
> > On 2017/12/08 17:22, Michal Hocko wrote:
> > > On Thu 07-12-17 17:23:05, Suren Baghdasaryan wrote:
> > >> Slab shrinkers can be quite time consuming and when signal
> > >> is pending they can delay handling of the signal. If fatal
> > >> signal is pending there is no point in shrinking that process
> > >> since it will be killed anyway.
> > > 
> > > The thing is that we are _not_ shrinking _that_ process. We are
> > > shrinking globally shared objects and the fact that the memory pressure
> > > is so large that the kswapd doesn't keep pace with it means that we have
> > > to throttle all allocation sites by doing this direct reclaim. I agree
> > > that expediting killed task is a good thing in general because such a
> > > process should free at least some memory.
> > 
> > But doesn't doing direct reclaim mean that allocation request of already
> > fatal_signal_pending() threads will not succeed unless some memory is
> > reclaimed (or selected as an OOM victim)? Won't it just spin the "too
> > small to fail" retry loop at full speed in the worst case?
> 
> Well, normally kswapd would do the work on the background. But this
> would have to be carefully evaluated. That is why I've said "expedite"
> rather than skip.

Relying on kswapd is a bad assumption, because kswapd itself can be blocked,
e.g. on fs locks, waiting for somebody else to reclaim memory.

>  
> > >> This change checks for pending
> > >> fatal signals inside shrink_slab loop and if one is detected
> > >> terminates this loop early.
> > > 
> > > This changelog doesn't really address my previous review feedback, I am
> > > afraid. You should mention more details about the problems you are seeing
> > > and what causes them. If we have a shrinker which takes a considerable
> > > amount of time, then we should be addressing that. If that is not
> > > possible then it should at least be documented.
> > 
> > Unfortunately, it is possible to get blocked inside shrink_slab() for a very
> > long time, as in the example from http://lkml.kernel.org/r/1512705038.7843.6.camel@gmail.com .
> 
> As I've said any excessive shrinker should definitely be evaluated.

The cause of a stall inside shrink_slab() can be memory pressure itself.
There would be no problem if kswapd alone were sufficient (i.e. if direct
reclaim were not needed), but there are many problems once direct reclaim
becomes necessary.

I agree that making waits/loops killable is generally good, but be prepared
for the worst case. For example, introduce __GFP_KILLABLE on a "best effort"
basis (i.e. with no guarantee that the allocating thread leaves the page
allocator slowpath immediately) and check fatal_signal_pending() only when
__GFP_KILLABLE is set. That is,

+		/*
+		 * We are about to die and free our memory.
+		 * Stop shrinking which might delay signal handling.
+		 */
+		if (unlikely((gfp_mask & __GFP_KILLABLE) && fatal_signal_pending(current)))
+			break;

at shrink_slab() etc. and

+		if ((gfp_mask & __GFP_KILLABLE) && fatal_signal_pending(current))
+			goto nopage;

at __alloc_pages_slowpath().
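
The flag itself could be wired up along these lines (a sketch only: since
__GFP_KILLABLE does not exist yet, the bit value below merely assumes the
next unused ___GFP_* bit in include/linux/gfp.h):

+/* Hypothetical opt-in flag; bit value assumes the next unused bit. */
+#define ___GFP_KILLABLE		0x4000000u
+#define __GFP_KILLABLE		((__force gfp_t)___GFP_KILLABLE)

An opt-in caller which can tolerate allocation failure once it has been
fatally signalled would then do e.g.:

	void *buf = kmalloc(len, GFP_KERNEL | __GFP_KILLABLE);

	if (!buf)
		return -ENOMEM;	/* possibly failed early because we are dying */

Callers which do not pass __GFP_KILLABLE keep the current "too small to
fail" behaviour, so nothing changes for them.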
