Message-ID: <7307bb7a-7c45-43f7-b073-acd9e1389000@linux.alibaba.com>
Date: Tue, 3 Jun 2025 22:22:46 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, david@...hat.com,
 shakeel.butt@...ux.dev, lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com,
 vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com, donettom@...ux.ibm.com,
 aboorvad@...ux.ibm.com, sj@...nel.org, linux-mm@...ck.org,
 linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: fix the inaccurate memory statistics issue for users



On 2025/6/3 18:28, Michal Hocko wrote:
> On Tue 03-06-25 16:32:35, Baolin Wang wrote:
>>
>>
>> On 2025/6/3 16:15, Michal Hocko wrote:
>>> On Tue 03-06-25 16:08:21, Baolin Wang wrote:
>>>>
>>>>
>>>> On 2025/5/30 21:39, Michal Hocko wrote:
>>>>> On Thu 29-05-25 20:53:13, Andrew Morton wrote:
>>>>>> On Sat, 24 May 2025 09:59:53 +0800 Baolin Wang <baolin.wang@...ux.alibaba.com> wrote:
>>>>>>
>>>>>>> On some large machines with a high number of CPUs running a 64K pagesize
>>>>>>> kernel, we found that the 'RES' field reported by the top command is
>>>>>>> always 0 for some processes, which causes a lot of confusion for users.
>>>>>>>
>>>>>>>        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
>>>>>>>     875525 root      20   0   12480      0      0 R   0.3   0.0   0:00.08 top
>>>>>>>          1 root      20   0  172800      0      0 S   0.0   0.0   0:04.52 systemd
>>>>>>>
>>>>>>> The main reason is that the batch size of the percpu counter is quite
>>>>>>> large on these machines, so a significant value can remain cached in the
>>>>>>> percpu counters, ever since commit f1a7941243c1 ("mm: convert mm's rss
>>>>>>> stats into percpu_counter") converted mm's rss stats into percpu_counter.
>>>>>>> Intuitively, the batch number should be tuned down, but on some paths
>>>>>>> performance may take precedence over statistical accuracy. Therefore,
>>>>>>> introduce a new interface that adds in the cached percpu counts and
>>>>>>> displays the result to users, which removes the confusion. In addition,
>>>>>>> this change is not expected to be on a performance-critical path, so the
>>>>>>> modification should be acceptable.
>>>>>>>
>>>>>>> Fixes: f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")
>>>>>>
>>>>>> Three years ago.
>>>>>>
>>>>>>> Tested-by: Donet Tom <donettom@...ux.ibm.com>
>>>>>>> Reviewed-by: Aboorva Devarajan <aboorvad@...ux.ibm.com>
>>>>>>> Tested-by: Aboorva Devarajan <aboorvad@...ux.ibm.com>
>>>>>>> Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>
>>>>>>> Acked-by: SeongJae Park <sj@...nel.org>
>>>>>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>>>>>
>>>>>> Thanks, I added cc:stable to this.
>>>>>
>>>>> I have only noticed this new posting now. I do not think this is
>>>>> stable material. I am also not convinced that the impact of exposing
>>>>> the pcp lock to userspace has been properly analyzed and documented in
>>>>> the changelog. I am not nacking the patch (yet) but I would like to see
>>>>> a serious analysis showing that this has been properly thought through.
>>>>
>>>> Good point. I did a quick measurement on my 32-core Arm machine. I ran two
>>>> workloads: one is the 'top' command: top -d 1 (updating every second), and
>>>> the other is a kernel build (time make -j32).
>>>>
>>>>   From the following data, I did not see any significant impact of the
>>>> patch on the execution of the kernel build workload.
>>>
>>> I do not think this is really representative of an adverse workload. I
>>> believe you need to look at which potentially sensitive kernel code
>>> paths run with the lock held, and how a busy loop over the affected proc
>>> files would influence those in the worst case. Maybe there are no such
>>> kernel code paths to really worry about. This should be a part of the
>>> changelog though.
>>
>> IMO, kernel code paths usually have batch caching to avoid lock contention,
>> so I think the impact on kernel code paths is not that significant.
> 
> This is a very generic statement. Does this refer to the existing pcp
> locking usage in the kernel? Have you evaluated existing users?

Let me try to clarify further.

The 'mm->rss_stat' counters are updated via add_mm_counter() and 
dec/inc_mm_counter(), which are all wrappers around 
percpu_counter_add_batch(). In percpu_counter_add_batch(), percpu batch 
caching is used to avoid 'fbc->lock' contention. This patch changes 
task_mem() and task_statm() to read the accurate mm counters under the 
'fbc->lock', but this will not exacerbate kernel 'mm->rss_stat' lock 
contention, due to the percpu batch caching of the mm counters.
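
For reference, the writer path looks roughly like the sketch below 
(simplified from an older version of lib/percpu_counter.c; details vary 
across kernel versions). The point is that 'fbc->lock' is only taken 
when a CPU's cached delta reaches the batch size, and 
compute_batch_value() scales that batch with the number of online CPUs 
(max(32, 2 * num_online_cpus())). So on a machine with hundreds of CPUs 
running a 64K page-size kernel, each CPU can cache hundreds of pages, 
i.e. tens of megabytes of RSS, which is exactly why the uncorrected 
percpu_counter_read() value can show 0 for a small process.

void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;
	unsigned long flags;

	/* Fast path: fold the update into this CPU's cached delta. */
	local_irq_save(flags);
	count = __this_cpu_read(*fbc->counters) + amount;
	if (abs(count) >= batch) {
		/* Slow path: flush the cached delta into fbc->count. */
		raw_spin_lock(&fbc->lock);
		fbc->count += count;
		__this_cpu_sub(*fbc->counters, count - amount);
		raw_spin_unlock(&fbc->lock);
	} else {
		this_cpu_add(*fbc->counters, amount);
	}
	local_irq_restore(flags);
}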

You might argue that my test cases cannot demonstrate actual lock 
contention, but they have already shown that there is no significant 
'fbc->lock' contention when the kernel updates 'mm->rss_stat'.
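
Concretely, the reader side that task_mem() and task_statm() move onto 
sums all per-CPU deltas under 'fbc->lock'. A minimal sketch of that 
path, adapted from lib/percpu_counter.c (the get_mm_counter_sum() 
helper name below is just illustrative, not necessarily what the patch 
calls it):

/* Reader side: fold every CPU's cached delta into the shared count
 * while holding fbc->lock. */
s64 __percpu_counter_sum(struct percpu_counter *fbc)
{
	unsigned long flags;
	s64 ret;
	int cpu;

	raw_spin_lock_irqsave(&fbc->lock, flags);
	ret = fbc->count;
	for_each_possible_cpu(cpu)
		ret += *per_cpu_ptr(fbc->counters, cpu);
	raw_spin_unlock_irqrestore(&fbc->lock, flags);
	return ret;
}

/* Illustrative helper (name hypothetical): an accurate counterpart
 * to get_mm_counter() for the proc readers. */
static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
{
	return percpu_counter_sum_positive(&mm->rss_stat[member]);
}

So a reader pays a cost proportional to the number of CPUs while 
holding the lock, but writers still only take the same lock once per 
'batch' updates, which is why I do not expect the proc readers to 
measurably slow down the update paths.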

>> Therefore, I
>> also think it's hard to find an adverse workload.
>>
>> How about adding the following comment to the commit log?
>> "
>> I did a quick measurement on my 32-core Arm machine. I ran two workloads:
>> one is the 'top' command: top -d 1 (updating every second), and the other
>> is a kernel build (time make -j32).
> 
> This test doesn't really do much to trigger actual lock contention, as
> already mentioned.
> 
