Date:   Tue, 31 Mar 2020 17:57:52 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Chris Down <chris@...isdown.name>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Jakub Kicinski <kuba@...nel.org>, linux-mm@...ck.org,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH] mm, memcg: Do not high throttle allocators based on
 wraparound

On Tue 31-03-20 16:24:24, Chris Down wrote:
> From: Jakub Kicinski <kuba@...nel.org>
> 
> If a cgroup violates its memory.high constraints, we may end
> up unduly penalising it. For example, for the following hierarchy:
> 
> A:   max high, 20 usage
> A/B: 9 high, 10 usage
> A/C: max high, 10 usage
> 
> We would end up doing the following calculation below when calculating
> high delay for A/B:
> 
> A/B: 10 - 9 = 1...
> A:   20 - PAGE_COUNTER_MAX underflows to a huge unsigned value, so
>      max_overage gets set to that bogus value.
> 
> This gets worse with higher disparities in usage in the parent.
> 
> I have no idea how this disappeared from the final version of the patch,
> but it is certainly Not Good(tm). This wasn't obvious in testing
> because, for a simple cgroup hierarchy with only one child, the result
> is usually roughly the same. It's only in more complex hierarchies that
> things go really awry (although even then, the effects are capped at the
> 2-second maximum sleep in schedule_timeout_killable).

I find this paragraph rather confusing. This is essentially an unsigned
underflow whenever any memcg up the hierarchy is within its high limit,
right? There doesn't really seem to be anything complex about such a
hierarchy.
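
Just to make that concrete, a tiny userspace sketch of the arithmetic
(the constant below is only a stand-in for the kernel's
PAGE_COUNTER_MAX; this is not the actual memcg code): with usage below
high, the unsigned subtraction wraps around, and anything derived from
it then dominates max_overage:

#include <stdio.h>

/* stand-in for the kernel's PAGE_COUNTER_MAX (LONG_MAX / PAGE_SIZE) */
#define COUNTER_MAX	(0x7fffffffffffffffUL / 4096)

int main(void)
{
	unsigned long usage = 20;		/* pages charged to A */
	unsigned long high = COUNTER_MAX;	/* A has no high limit configured */

	/* usage < high, so this wraps around instead of going negative */
	unsigned long overage = usage - high;

	printf("overage for A: %lu\n", overage);	/* huge bogus number */
	return 0;
}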

> [chris@...isdown.name: changelog]
> 
> Fixes: e26733e0d0ec ("mm, memcg: throttle allocators based on ancestral memory.high")
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> Signed-off-by: Chris Down <chris@...isdown.name>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: stable@...r.kernel.org # 5.4.x

To the patch
Acked-by: Michal Hocko <mhocko@...e.com>

> ---
>  mm/memcontrol.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index eecf003b0c56..75a978307863 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2336,6 +2336,9 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg,
>  		usage = page_counter_read(&memcg->memory);
>  		high = READ_ONCE(memcg->high);
>  
> +		if (usage <= high)
> +			continue;
> +
>  		/*
>  		 * Prevent division by 0 in overage calculation by acting as if
>  		 * it was a threshold of 1 page
> -- 
> 2.26.0
> 
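For reference, a rough userspace model of the per-level scan with this
hunk applied (the names, constants and scaling are illustrative, not the
real calculate_high_delay() code): a level that is within its high limit
is simply skipped, so only A/B's genuine ~1/9 overage feeds max_overage.

#include <stdio.h>

/* illustrative stand-ins, not the kernel definitions */
#define COUNTER_MAX		(0x7fffffffffffffffUL / 4096)
#define DELAY_PRECISION_SHIFT	20

struct level {
	const char *name;
	unsigned long usage;	/* pages charged */
	unsigned long high;	/* memory.high in pages */
};

int main(void)
{
	/* A/B and its ancestor A, as in the changelog example */
	struct level path[] = {
		{ "A/B", 10, 9 },
		{ "A",   20, COUNTER_MAX },
	};
	unsigned long max_overage = 0;

	for (unsigned long i = 0; i < sizeof(path) / sizeof(path[0]); i++) {
		unsigned long usage = path[i].usage;
		unsigned long high = path[i].high;
		unsigned long overage;

		/* the fix: a level within its limit contributes no overage */
		if (usage <= high)
			continue;

		if (!high)
			high = 1;	/* avoid division by zero */

		overage = ((usage - high) << DELAY_PRECISION_SHIFT) / high;
		if (overage > max_overage)
			max_overage = overage;

		printf("%s: overage %lu\n", path[i].name, overage);
	}
	printf("max_overage: %lu\n", max_overage);
	return 0;
}

Run against the changelog's hierarchy this reports an overage only for
A/B, while A no longer contributes a wrapped-around value.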

-- 
Michal Hocko
SUSE Labs
