Message-ID: <20181205231110.GA11330@castle.DHCP.thefacebook.com>
Date:   Wed, 5 Dec 2018 23:11:16 +0000
From:   Roman Gushchin <guro@...com>
To:     Xunlei Pang <xlpang@...ux.alibaba.com>
CC:     Michal Hocko <mhocko@...e.com>,
        Johannes Weiner <hannes@...xchg.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH 1/3] mm/memcg: Fix min/low usage in propagate_protected_usage()

On Wed, Dec 05, 2018 at 04:58:31PM +0800, Xunlei Pang wrote:
> Hi Roman,
> 
> On 2018/12/4 AM 2:00, Roman Gushchin wrote:
> > On Mon, Dec 03, 2018 at 04:01:17PM +0800, Xunlei Pang wrote:
> >> When usage exceeds min, the reported min usage should be min rather
> >> than 0. Apply the same to low.
> >>
> >> Signed-off-by: Xunlei Pang <xlpang@...ux.alibaba.com>
> >> ---
> >>  mm/page_counter.c | 12 ++----------
> >>  1 file changed, 2 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/mm/page_counter.c b/mm/page_counter.c
> >> index de31470655f6..75d53f15f040 100644
> >> --- a/mm/page_counter.c
> >> +++ b/mm/page_counter.c
> >> @@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
> >>  		return;
> >>  
> >>  	if (c->min || atomic_long_read(&c->min_usage)) {
> >> -		if (usage <= c->min)
> >> -			protected = usage;
> >> -		else
> >> -			protected = 0;
> >> -
> >> +		protected = min(usage, c->min);
> > 
> > This change makes sense in combination with patch 3, but not as a
> > standalone "fix". It's not a bug; it's required behavior unless you start
> > scanning proportionally to the memory.low/min excess.
> > 
> > Please, reflect this in the commit message. Or, even better, merge it into
> > the patch 3.
> 
> The more I look at it, the more I think it's a bug, but anyway I'm fine
> with merging it into patch 3 :-)

It's not. I explained this back when we were discussing that patch. TL;DR:
because the decision to scan or to skip is binary now, this trick is
necessary to prioritize one cgroup over another. Otherwise both cgroups can
have their usage above their effective memory protections, and they will be
scanned at the same pace.
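
To make the difference concrete, here is a minimal userspace sketch of the
two variants (not the kernel code itself; the function names are made up for
illustration):

#include <stdio.h>

/* Current kernel behavior: once usage exceeds min, the counter
 * propagates 0 protected usage, i.e. the protection is lost entirely. */
static unsigned long protected_binary(unsigned long usage, unsigned long min)
{
	return usage <= min ? usage : 0;
}

/* Behavior after this patch: usage above min still propagates min. */
static unsigned long protected_clamped(unsigned long usage, unsigned long min)
{
	return usage < min ? usage : min;
}

int main(void)
{
	unsigned long min = 100, usage = 120;	/* pages; usage exceeds min */

	printf("binary:  %lu\n", protected_binary(usage, min));	/* 0 */
	printf("clamped: %lu\n", protected_clamped(usage, min));	/* 100 */
	return 0;
}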

If you have any doubts, you can try running the memcg kselftests with and
without this change; you'll see the difference.
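
(For reference, the memcg kselftests live under tools/testing/selftests/cgroup/
in the kernel tree; assuming a built source tree, a typical invocation would be
"make -C tools/testing/selftests TARGETS=cgroup run_tests".)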

Thanks!
