Message-ID: <Zfq6XaACmN2JssTW@tiehlicka>
Date: Wed, 20 Mar 2024 11:28:45 +0100
From: Michal Hocko <mhocko@...e.com>
To: Pavel Tikhomirov <ptikhomirov@...tuozzo.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
	Roman Gushchin <roman.gushchin@...ux.dev>,
	Shakeel Butt <shakeel.butt@...ux.dev>,
	Muchun Song <muchun.song@...ux.dev>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Vladimir Davydov <vdavydov.dev@...il.com>, cgroups@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, kernel@...nvz.org
Subject: Re: [PATCH] mm/memcontrol: stop resize loop if limit was changed
 again

On Wed 20-03-24 18:03:30, Pavel Tikhomirov wrote:
> In memory_max_write() we first set memcg->memory.max and only then
> try to enforce it in a loop. What if, while we are in the loop, someone else
> has changed memcg->memory.max but we are still trying to enforce
> the old value? I believe this can lead to nasty consequences like getting
> an OOM on a perfectly fine cgroup within its limits, or excess reclaim.

I would argue that uncoordinated hard limit configuration can cause
problems on its own. Besides, how is this any different from changing
the high limit while we are inside the reclaim loop?

> We also have exactly the same thing in memory_high_write().
> 
> So let's stop enforcing the old limits if we already have new ones.

I do not see any reasons why this would be harmful, I just do not see why
this is a real thing or why the new behavior is any better for racing
updaters, as those are not deterministic anyway. If you have an actual
use case, more details would really help to justify this change.

The existing behavior makes some sense as it enforces the given limit
deterministically.
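
To make the behavior under discussion concrete, here is a minimal
user-space sketch (not kernel code; fake_counter, reclaim_some() and the
numbers are invented for illustration) of the loop shape visible in the
diff below, with and without the proposed re-check of the limit:

/* Compile with: cc -o sketch sketch.c */
#include <stdio.h>

struct fake_counter {
	unsigned long usage;	/* stands in for page_counter_read()  */
	unsigned long max;	/* stands in for memcg->memory.max    */
};

/* Pretend reclaim frees a fixed amount per iteration. */
static void reclaim_some(struct fake_counter *c)
{
	unsigned long freed = 10;

	c->usage = c->usage > freed ? c->usage - freed : 0;
}

/*
 * The loop as memory_max_write() runs it: 'max' is the value this writer
 * installed before entering the loop.  If another writer raises c->max
 * meanwhile, this writer still reclaims down to its stale 'max' unless
 * the proposed "c->max != max" check is present.
 */
static void enforce(struct fake_counter *c, unsigned long max, int with_check)
{
	for (;;) {
		unsigned long nr_pages = c->usage;

		if (with_check && c->max != max)
			break;		/* someone else changed the limit */
		if (nr_pages <= max)
			break;
		reclaim_some(c);
	}
}

int main(void)
{
	struct fake_counter c = { .usage = 1000, .max = 100 };

	/* Writer A set max=100 and starts enforcing; writer B then sets 500. */
	c.max = 500;
	enforce(&c, 100, 0);
	printf("without check: usage reclaimed to %lu\n", c.usage);	/* 100  */

	c.usage = 1000;
	c.max = 500;
	enforce(&c, 100, 1);
	printf("with check:    usage stays at %lu\n", c.usage);	/* 1000 */
	return 0;
}

In other words, without the check the first writer deterministically
enforces the value it wrote; with the check it bails out as soon as it
notices a newer limit.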

> Signed-off-by: Pavel Tikhomirov <ptikhomirov@...tuozzo.com>
> ---
>  mm/memcontrol.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 61932c9215e7..81b303728491 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6769,6 +6769,9 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
>  		unsigned long nr_pages = page_counter_read(&memcg->memory);
>  		unsigned long reclaimed;
>  
> +		if (memcg->memory.high != high)
> +			break;
> +
>  		if (nr_pages <= high)
>  			break;
>  
> @@ -6817,6 +6820,9 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
>  	for (;;) {
>  		unsigned long nr_pages = page_counter_read(&memcg->memory);
>  
> +		if (memcg->memory.max != max)
> +			break;
> +
>  		if (nr_pages <= max)
>  			break;
>  
> -- 
> 2.43.0

-- 
Michal Hocko
SUSE Labs
