Message-ID: <20190530061221.GA6703@dhcp22.suse.cz>
Date:   Thu, 30 May 2019 08:12:21 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Chris Down <chris@...isdown.name>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Tejun Heo <tj@...nel.org>, Roman Gushchin <guro@...com>,
        Dennis Zhou <dennis@...nel.org>, linux-kernel@...r.kernel.org,
        cgroups@...r.kernel.org, linux-mm@...ck.org, kernel-team@...com
Subject: Re: [PATCH REBASED] mm, memcg: Make scan aggression always exclude
 protection

[Sorry for a late reply]

On Fri 22-03-19 16:03:07, Chris Down wrote:
[...]
> With this patch, memory.low and memory.min affect reclaim pressure in a
> more understandable and composable way. For example, from a user
> standpoint, "protected" memory now remains untouchable from a reclaim
> aggression standpoint, and users can also have more confidence that
> bursty workloads will still receive some amount of guaranteed
> protection.

Maybe I am missing something, so correct me if I am wrong, but the new
calculation actually means that we always allow scanning even
min-protected memcgs, right?

Because ...

[...]

> +static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
> +						  bool in_low_reclaim)
>  {
> -	if (mem_cgroup_disabled()) {
> -		*min = 0;
> -		*low = 0;
> -		return;
> -	}
> +	if (mem_cgroup_disabled())
> +		return 0;
> +
> +	if (in_low_reclaim)
> +		return READ_ONCE(memcg->memory.emin);
>  
> -	*min = READ_ONCE(memcg->memory.emin);
> -	*low = READ_ONCE(memcg->memory.elow);
> +	return max(READ_ONCE(memcg->memory.emin),
> +		   READ_ONCE(memcg->memory.elow));
>  }
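
Just to spell out the two return paths for a memcg that has both
memory.min and memory.low set (so emin, elow > 0):

	in_low_reclaim	-> emin
	otherwise	-> max(emin, elow)

Either way a min protected memcg reports a non-zero protection, so
whether memory.min is actually honored depends entirely on what the
caller does with that value below.
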
[...]
> +			unsigned long cgroup_size = mem_cgroup_size(memcg);
> +
> +			/* Avoid TOCTOU with earlier protection check */
> +			cgroup_size = max(cgroup_size, protection);
> +
> +			scan = lruvec_size - lruvec_size * protection /
> +				cgroup_size;
>  
[...]
> -			scan = clamp(scan, SWAP_CLUSTER_MAX, lruvec_size);
> +			scan = max(scan, SWAP_CLUSTER_MAX);

Here the zero or sub-SWAP_CLUSTER_MAX scan target gets extended to
SWAP_CLUSTER_MAX. Unless I am missing something, this is not correct,
because min protection should be a guarantee even in in_low_reclaim
mode.
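
To put concrete numbers on it, here is a minimal userspace sketch of
the quoted arithmetic for a memcg that is fully covered by memory.min
(SWAP_CLUSTER_MAX assumed to be 32 as in include/linux/swap.h, the
sizes are made up):

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* assumed, as in include/linux/swap.h */

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

int main(void)
{
	/* made-up sizes: a memcg fully covered by memory.min */
	unsigned long lruvec_size = 1UL << 14;	/* pages on this lru */
	unsigned long protection  = 1UL << 14;	/* emin == usage */
	unsigned long cgroup_size = 1UL << 14;	/* mem_cgroup_size() */
	unsigned long scan;

	/* mirror the quoted calculation */
	cgroup_size = max_ul(cgroup_size, protection);
	scan = lruvec_size - lruvec_size * protection / cgroup_size;
	printf("scan before max(): %lu\n", scan);	/* prints 0 */

	scan = max_ul(scan, SWAP_CLUSTER_MAX);
	printf("scan after  max(): %lu\n", scan);	/* prints 32 */

	return 0;
}

So a memcg that should be fully protected by memory.min still ends up
with SWAP_CLUSTER_MAX pages scanned per lruvec.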

>  		} else {
>  			scan = lruvec_size;
>  		}
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs
