Message-ID: <ef2c9e13-cb38-4447-b595-f461f3f25432@linux.alibaba.com>
Date: Tue, 3 Jun 2025 16:32:35 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, david@...hat.com,
shakeel.butt@...ux.dev, lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com,
vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com, donettom@...ux.ibm.com,
aboorvad@...ux.ibm.com, sj@...nel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: fix the inaccurate memory statistics issue for users
On 2025/6/3 16:15, Michal Hocko wrote:
> On Tue 03-06-25 16:08:21, Baolin Wang wrote:
>>
>>
>> On 2025/5/30 21:39, Michal Hocko wrote:
>>> On Thu 29-05-25 20:53:13, Andrew Morton wrote:
>>>> On Sat, 24 May 2025 09:59:53 +0800 Baolin Wang <baolin.wang@...ux.alibaba.com> wrote:
>>>>
>>>>> On some large machines with a high number of CPUs running a 64K pagesize
>>>>> kernel, we found that the 'RES' field is always displayed as 0 by the top
>>>>> command for some processes, which causes a lot of confusion for users.
>>>>>
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>> 875525 root 20 0 12480 0 0 R 0.3 0.0 0:00.08 top
>>>>> 1 root 20 0 172800 0 0 S 0.0 0.0 0:04.52 systemd
>>>>>
>>>>> The main reason is that the batch size of the percpu counter is quite large
>>>>> on these machines, so a significant amount of the count stays cached per-CPU,
>>>>> ever since commit f1a7941243c1 ("mm: convert mm's rss stats into
>>>>> percpu_counter") converted the mm rss stats to a percpu_counter. Intuitively,
>>>>> the batch size should be tuned, but on some paths performance may take
>>>>> precedence over statistical accuracy. Therefore, introduce a new interface
>>>>> that sums in the percpu counts and displays the result to users, which
>>>>> removes the confusion. In addition, this change is not expected to be on a
>>>>> performance-critical path, so the modification should be acceptable.
>>>>>
>>>>> Fixes: f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")
>>>>
>>>> Three years ago.
>>>>
>>>>> Tested-by: Donet Tom <donettom@...ux.ibm.com>
>>>>> Reviewed-by: Aboorva Devarajan <aboorvad@...ux.ibm.com>
>>>>> Tested-by: Aboorva Devarajan <aboorvad@...ux.ibm.com>
>>>>> Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>
>>>>> Acked-by: SeongJae Park <sj@...nel.org>
>>>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
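
For a rough sense of the scale involved: with the default percpu_counter
batch of max(32, 2 * num_online_cpus()), each CPU can cache up to about one
batch worth of pages before folding them into the global count, so the cheap
read can miss up to roughly nr_cpus * batch pages. Below is a small sketch of
that bound; the 256-CPU figure is an illustrative assumption, not a
measurement from this thread.

/*
 * Worst-case drift of the cheap percpu_counter read for the mm rss stats,
 * assuming the default batch of max(32, 2 * num_online_cpus()) and an
 * illustrative 256-CPU machine running a 64K pagesize kernel.
 */
#include <stdio.h>

int main(void)
{
	long long nr_cpus = 256;		/* illustrative assumption */
	long long page_size = 64 * 1024;	/* 64K pagesize kernel */
	long long batch = 2 * nr_cpus;		/* default: max(32, nr*2) */
	long long hidden_pages = nr_cpus * batch;

	printf("worst-case hidden RSS: ~%lld MiB\n",
	       (hidden_pages * page_size) >> 20);
	return 0;
}

On such a machine the cheap read can be off by several GiB, which is why a
small process can show an RSS of 0.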
>>>>
>>>> Thanks, I added cc:stable to this.
>>>
>>> I have only noticed this new posting now. I do not think this is
>>> stable material. I am also not convinced that the impact of the pcp lock
>>> exposure to userspace has been properly analyzed and documented in
>>> the changelog. I am not nacking the patch (yet) but I would like to see
>>> a serious analysis showing that this has been properly thought through.
>>
>> Good point. I did a quick measurement on my 32-core Arm machine. I ran two
>> workloads: one is the 'top' command (top -d 1, updating every second),
>> and the other is a kernel build (time make -j32).
>>
>> From the following data, I did not see any significant impact of the patch
>> changes on the execution of the kernel building workload.
>
> I do not think this is really representative of an adverse workload. I
> believe you need to have a look at which potentially sensitive kernel code
> paths run with the lock held and how a busy loop over the affected proc
> files would influence those in the worst case. Maybe there are no such
> kernel code paths to really worry about. This should be a part of the
> changelog though.
IMO, kernel code paths usually batch their counter updates in per-CPU
caches to avoid lock contention, so I think the impact on kernel code
paths will not be noticeable. That is also why I think it is hard to
construct an adverse workload. How about adding the following to the
commit log?
"
I did a quick measurement on my 32-core Arm machine. I ran two
workloads: one is the 'top' command (top -d 1, updating every second),
and the other is a kernel build (time make -j32).
From the following data, I did not see any significant impact of the
patch on the kernel build workload. In addition, kernel code paths
usually batch their counter updates in per-CPU caches to avoid pcp lock
contention, so the impact on kernel code paths should not be noticeable
even when the pcp lock is exposed to userspace.
w/o patch:
real 4m33.887s
user 118m24.153s
sys 9m51.402s
w/ patch:
real 4m34.495s
user 118m21.739s
sys 9m39.232s
"
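
For reference, a minimal sketch of the distinction being discussed, assuming
the post-f1a7941243c1 layout where mm->rss_stat[] is an array of struct
percpu_counter; the helper names below are illustrative and not necessarily
the ones used in the patch.

/*
 * Sketch only -- illustrative names, not the exact hunk from the patch.
 */
#include <linux/mm_types.h>
#include <linux/percpu_counter.h>

/* Cheap read used on hot paths: returns only the cached global count. */
static inline unsigned long mm_counter_read_fast(struct mm_struct *mm, int member)
{
	return percpu_counter_read_positive(&mm->rss_stat[member]);
}

/*
 * Accurate read for /proc reporting: takes fbc->lock and folds in every
 * CPU's cached delta, so it is exact but briefly holds the pcp lock.
 */
static inline unsigned long mm_counter_read_sum(struct mm_struct *mm, int member)
{
	return percpu_counter_sum_positive(&mm->rss_stat[member]);
}

The second helper is the kind of interface the patch adds for the /proc
readers, while normal kernel paths keep using the batched, mostly lockless
update path and the cheap read.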