Message-ID: <66201b4a-44a5-4221-810a-897699425195@huaweicloud.com>
Date: Thu, 11 Dec 2025 08:43:29 +0800
From: Chen Ridong <chenridong@...weicloud.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: mhocko@...nel.org, roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
 muchun.song@...ux.dev, akpm@...ux-foundation.org, axelrasmussen@...gle.com,
 yuanchu@...gle.com, weixugc@...gle.com, david@...nel.org,
 zhengqi.arch@...edance.com, lorenzo.stoakes@...cle.com,
 cgroups@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 lujialin4@...wei.com
Subject: Re: [PATCH -next v2 2/2] memcg: remove mem_cgroup_size()



On 2025/12/11 0:36, Johannes Weiner wrote:
> On Wed, Dec 10, 2025 at 07:11:42AM +0000, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@...wei.com>
>>
>> The mem_cgroup_size() helper is used only in apply_proportional_protection()
>> to read the current memory usage. Its semantics are unclear, and it is
>> inconsistent with other call sites, which call page_counter_read()
>> directly for the same purpose.
>>
>> Remove this helper and replace its usage with page_counter_read for
>> clarity. Additionally, rename the local variable 'cgroup_size' to 'usage'
>> to better reflect its meaning.
> 
> +1
> 
> I don't think the helper adds much.
> 
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -2451,6 +2451,7 @@ static inline void calculate_pressure_balance(struct scan_control *sc,
>>  static unsigned long apply_proportional_protection(struct mem_cgroup *memcg,
>>  		struct scan_control *sc, unsigned long scan)
>>  {
>> +#ifdef CONFIG_MEMCG
>>  	unsigned long min, low;
>>  
>>  	mem_cgroup_protection(sc->target_mem_cgroup, memcg, &min, &low);
>> @@ -2485,7 +2486,7 @@ static unsigned long apply_proportional_protection(struct mem_cgroup *memcg,
>>  		 * again by how much of the total memory used is under
>>  		 * hard protection.
>>  		 */
>> -		unsigned long cgroup_size = mem_cgroup_size(memcg);
>> +		unsigned long usage = page_counter_read(&memcg->memory);
>>  		unsigned long protection;
>>  
>>  		/* memory.low scaling, make sure we retry before OOM */
>> @@ -2497,9 +2498,9 @@ static unsigned long apply_proportional_protection(struct mem_cgroup *memcg,
>>  		}
>>  
>>  		/* Avoid TOCTOU with earlier protection check */
>> -		cgroup_size = max(cgroup_size, protection);
>> +		usage = max(usage, protection);
>>  
>> -		scan -= scan * protection / (cgroup_size + 1);
>> +		scan -= scan * protection / (usage + 1);
>>  
>>  		/*
>>  		 * Minimally target SWAP_CLUSTER_MAX pages to keep
>> @@ -2508,6 +2509,7 @@ static unsigned long apply_proportional_protection(struct mem_cgroup *memcg,
>>  		 */
>>  		scan = max(scan, SWAP_CLUSTER_MAX);
>>  	}
>> +#endif
> 
> To avoid the ifdef, how about making it
> 
> 	bool mem_cgroup_protection(root, memcg, &min, &low, &usage)
> 
> and branching the scaling on that return value? The compiler should be
> able to eliminate the entire branch in the !CONFIG_MEMCG case, and it
> keeps a cleaner split between memcg logic and reclaim logic.

Much better, will update.
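
Something along these lines, as a rough sketch (the parameter names, the
early return, and the !CONFIG_MEMCG stub are my assumptions, not the
final patch; the existing min/low calculation in the helper would stay
as it is):

	/* include/linux/memcontrol.h (sketch) */
	#ifdef CONFIG_MEMCG
	bool mem_cgroup_protection(struct mem_cgroup *root,
				   struct mem_cgroup *memcg,
				   unsigned long *min, unsigned long *low,
				   unsigned long *usage);
	#else
	static inline bool mem_cgroup_protection(struct mem_cgroup *root,
						 struct mem_cgroup *memcg,
						 unsigned long *min,
						 unsigned long *low,
						 unsigned long *usage)
	{
		*min = *low = *usage = 0;
		return false;
	}
	#endif

	/* mm/vmscan.c, apply_proportional_protection() (sketch) */
	unsigned long min, low, usage;

	if (!mem_cgroup_protection(sc->target_mem_cgroup, memcg,
				   &min, &low, &usage))
		return scan;

	/*
	 * Existing min/low scaling follows, using 'usage' reported by
	 * the helper instead of a separate page_counter_read() call.
	 */

With the stub returning false and zeroing its outputs, the scaling block
should be eliminated entirely in !CONFIG_MEMCG builds, so the #ifdef in
vmscan.c can go away as you suggested.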

-- 
Best regards,
Ridong

