Message-ID: <20200429090045.GW28637@dhcp22.suse.cz>
Date:   Wed, 29 Apr 2020 11:00:45 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     peter enderborg <peter.enderborg@...y.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start
 failing soon

On Wed 29-04-20 10:31:41, peter enderborg wrote:
> On 4/28/20 9:43 AM, Michal Hocko wrote:
> > On Mon 27-04-20 16:35:58, Andrew Morton wrote:
> > [...]
> >> No consumer of GFP_ATOMIC memory should consume an unbounded amount of
> >> it.
> >> Subsystems such as networking will consume a certain amount and
> >> will then start recycling it.  The total amount in-flight will vary
> >> over the longer term as workloads change.  A system that tunes
> >> thresholds dynamically will need to adapt rapidly enough to sudden
> >> load shifts, which might require unreasonable amounts of headroom.
> > I do agree. __GFP_HIGH/__GFP_ATOMIC are bound by the size of the
> > reserves under memory pressure. Then allocations start failing very
> > quickly and users have to cope with that, usually by deferring to a
> > sleepable context. Tuning reserves dynamically for heavy reserve
> > consumers would be possible, but I am worried that this is far from
> > trivial.
> >
> > We definitely need to understand what is going on here.  Why don't
> > kswapd + N direct reclaimers provide enough memory to satisfy both
> > the N threads and the reserve consumers? How many times do those
> > direct reclaimers have to retry?
> 
> Was this not supposed to be avoided with PSI? User space should have
> a fair chance to take action before things go bad.

Yes, PSI is certainly a tool to help userspace take action on heavy
reclaim. And I agree that if there is a desire to trigger the oom
killer early, as David states elsewhere in the thread, then this
approach should be considered.
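
For reference, a minimal userspace sketch of the PSI trigger interface
(see Documentation/accounting/psi.rst): register a "some" stall
threshold of 150ms per 1s window on /proc/pressure/memory and poll()
for events. The threshold values and the reaction to an event are
illustrative only, not something prescribed in this thread:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Wake up when "some" tasks stall on memory for a total of
	 * >= 150ms within any 1s window. */
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		perror("open /proc/pressure/memory");
		return 1;
	}
	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		perror("write trigger");
		return 1;
	}
	fds.events = POLLPRI;

	for (;;) {
		if (poll(&fds, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
		if (fds.revents & POLLERR) {
			fprintf(stderr, "trigger file went away\n");
			return 1;
		}
		if (fds.revents & POLLPRI)
			printf("memory pressure event\n");
	}
}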
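
And to illustrate the "defer to a sleepable context" point quoted
above, a sketch of the usual fallback pattern: an atomic-context
consumer that hands the refill to a workqueue when GFP_ATOMIC fails.
The one-slot spare pool and the helper names are hypothetical, not
code from this thread:

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical one-slot spare pool, kept trivial for the sketch. */
static void *pool_slot;

static void refill_pool(struct work_struct *work)
{
	/* Sleepable context: GFP_KERNEL may enter direct reclaim. */
	void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);

	if (buf)
		kfree(xchg(&pool_slot, buf));	/* publish, drop stale spare */
}
static DECLARE_WORK(refill_work, refill_pool);

void *get_buffer_atomic(void)
{
	/* Atomic context: may dip into reserves and must tolerate failure. */
	void *buf = kmalloc(PAGE_SIZE, GFP_ATOMIC);

	if (!buf) {
		/* Do not retry here; ask a sleepable worker to refill and
		 * fall back to the cached spare, if any. */
		schedule_work(&refill_work);
		buf = xchg(&pool_slot, NULL);
	}
	return buf;	/* may be NULL; the caller must cope */
}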
-- 
Michal Hocko
SUSE Labs
