Date:   Thu, 2 Nov 2017 00:37:08 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:     mhocko@...nel.org
Cc:     aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, rientjes@...gle.com,
        hannes@...xchg.org, mjaggi@...iumnetworks.com, mgorman@...e.de,
        oleg@...hat.com, vdavydov.dev@...il.com, vbabka@...e.cz
Subject: Re: [PATCH] mm,oom: Try last second allocation before and after selecting an OOM victim.

Michal Hocko wrote:
> > Does "that comment" refer to
> > 
> >   Elaborating on the comment: the reason for the high wmark is to reduce
> >   the likelihood of livelocks and to be sure to invoke the OOM killer if
> >   we're still under pressure and reclaim just failed. The high wmark is
> >   used to be sure the failure of reclaim isn't going to be ignored. If
> >   the min wmark is used as you propose, there's a risk of livelock, or
> >   at least of delayed OOM killer invocation.
> > 
> > part? Then I understand it is not about gfp flags.
> > 
> > But how can an OOM livelock happen when the last-second allocation does
> > not wait for memory reclaim (because __GFP_DIRECT_RECLAIM is masked)?
> > The last-second allocation will return immediately, and we will call
> > out_of_memory() if it fails.
> 
> I think Andrea just wanted to say that we do want to invoke the OOM
> killer and resolve the memory pressure rather than keep looping in the
> reclaim/oom path just because a few pages are allocated and freed in
> the meantime.

I see. Then that motivation no longer applies to the current code, except that

> 
> [...]
> > > I am not sure such a scenario matters all that much because it assumes
> > > that the oom victim doesn't really free much memory [1] (basically less
> > > than HIGH-MIN). Most OOM situations simply have a memory hog consuming
> > > a significant amount of memory.
> > 
> > The OOM killer does not always kill a memory hog consuming a significant
> > amount of memory. The OOM killer kills the process with the highest OOM
> > score (or one of its children instead, if it has any). I don't think it is
> > appropriate to assume that an OOM victim will free enough memory for an
> > ALLOC_WMARK_HIGH allocation to succeed.
> 
> OK, so let's agree to disagree. I claim that we shouldn't care all that
> much. If any of the current heuristics turns out to lead to killing too
> many tasks, then we should simply remove it rather than keep bloating
> already complex code with more and more kludges.

using ALLOC_WMARK_HIGH might cause more OOM killing than ALLOC_WMARK_MIN would.
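
For concreteness, here is a simplified sketch of the last-second allocation
being discussed (modeled on __alloc_pages_may_oom() in mm/page_alloc.c of
that era; abbreviated, not the verbatim source). get_page_from_freelist()
only checks the freelists against the given watermark and never sleeps, so
nothing here can wait for direct reclaim:

	/*
	 * Try the zonelist one last time before invoking the OOM killer.
	 * ALLOC_WMARK_HIGH means this attempt succeeds only if a parallel
	 * OOM kill has already freed a large amount of memory (roughly
	 * HIGH-MIN worth of pages); ALLOC_WMARK_MIN would accept far less,
	 * at the risk Andrea describes of delayed OOM killer invocation.
	 */
	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
	if (page)
		goto out;

	/* The attempt failed immediately; select and kill an OOM victim. */
	if (out_of_memory(&oc))
		*did_some_progress = 1;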

Thanks for the clarification.
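
For reference, the per-zone watermarks that "HIGH-MIN" above refers to are
defined roughly as follows in include/linux/mmzone.h (a sketch for kernels
of that era, not the verbatim source):

	enum zone_watermarks {
		WMARK_MIN,
		WMARK_LOW,
		WMARK_HIGH,
		NR_WMARK
	};

	/* struct zone { ... unsigned long watermark[NR_WMARK]; ... }; */
	#define min_wmark_pages(z)  (z->watermark[WMARK_MIN])
	#define low_wmark_pages(z)  (z->watermark[WMARK_LOW])
	#define high_wmark_pages(z) (z->watermark[WMARK_HIGH])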
