lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Thu, 21 May 2020 15:28:26 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Chris Down <chris@...isdown.name>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Tejun Heo <tj@...nel.org>, linux-mm@...ck.org,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH] mm, memcg: reclaim more aggressively before high
 allocator throttling

On Thu 21-05-20 14:05:30, Chris Down wrote:
> Chris Down writes:
> > > I believe I have asked in other email in this thread. Could you explain
> > > why enforcint the requested target (memcg_nr_pages_over_high) is
> > > insufficient for the problem you are dealing with? Because that would
> > > make sense for large targets to me while it would keep relatively
> > > reasonable semantic of the throttling - aka proportional to the memory
> > > demand rather than the excess.
> > 
> > memcg_nr_pages_over_high is related to the charge size. As such, if
> > you're way over memory.high as a result of transient reclaim failures,
> > but the majority of your charges are small, it's going to hard to make
> > meaningful progress:
> > 
> > 1. Most nr_pages will be MEMCG_CHARGE_BATCH, which is not enough to help;
> > 2. Large allocations will only get a single reclaim attempt to succeed.
> > 
> > As such, in many cases we're either doomed to successfully reclaim a
> > paltry amount of pages, or fail to reclaim a lot of pages. Asking
> > try_to_free_pages() to deal with those huge allocations is generally not
> > reasonable, regardless of the specifics of why it doesn't work in this
> > case.
> 
> Oh, I somehow elided the "enforcing" part of your proposal. Still, there's
> no guarantee even if large allocations are reclaimed fully that we will end
> up going back below memory.high, because even a single other large
> allocation which fails to reclaim can knock us out of whack again.

Yeah, there is no guarantee and that is fine. Because memory.high is not
about guarantee. It is about a best effort and slowing down the
allocation pace so that the userspace has time to do something about
that.

That being said I would be really curious about how enforcing the
memcg_nr_pages_over_high target works in your setups where you see the
problem. If that doesn't work for some reason and the reclaim should be
more pro-active then I would suggest to scale it via memcg_nr_pages_over_high
rather than essentially keep it around and ignore it. Preserving at
least some form of fairness and predictable behavior is important IMHO
but if there is no way to achieve that then there should be a very good
explanation for that.

I hope that we it is more clear what is our thinking now. I will be FTO
for upcoming days trying to get some rest from email so my response time
will be longer. Will be back on Thursday.
-- 
Michal Hocko
SUSE Labs

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ