Message-ID: <eb7e85f3-90d5-428c-a93a-7e54ade1479c@linux.alibaba.com>
Date: Fri, 23 May 2025 13:47:28 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Donet Tom <donettom@...ux.ibm.com>, akpm@...ux-foundation.org,
david@...hat.com, shakeelb@...gle.com
Cc: lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
rppt@...nel.org, surenb@...gle.com, mhocko@...e.com, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] mm: fix the inaccurate memory statistics issue for
users
On 2025/5/23 13:25, Donet Tom wrote:
>
> On 5/23/25 8:46 AM, Baolin Wang wrote:
>> On some large machines with a high number of CPUs running a 64K page
>> size kernel, we found that the 'RES' field displayed by the top command
>> is always 0 for some processes, which causes a lot of confusion for users.
>>
>> PID USER     PR NI   VIRT RES SHR S %CPU %MEM TIME+   COMMAND
>> 875525 root  20  0  12480   0   0 R  0.3  0.0 0:00.08 top
>> 1 root       20  0 172800   0   0 S  0.0  0.0 0:04.52 systemd
>>
>> The main reason is that the batch size of the percpu counter is quite
>> large on these machines, caching a significant percpu value, since
>> commit f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")
>> converted mm's rss stats into percpu_counter. Intuitively, the batch
>> number should be optimized, but on some paths, performance may take
>> precedence over statistical accuracy. Therefore, introduce a new
>> interface that adds up the percpu statistical counts and displays the
>> sum to users, which removes the confusion. In addition, this change is
>> not expected to be on a performance-critical path, so the modification
>> should be acceptable.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> ---
>> fs/proc/task_mmu.c | 14 +++++++-------
>> include/linux/mm.h | 5 +++++
>> 2 files changed, 12 insertions(+), 7 deletions(-)
>>
>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>> index b9e4fbbdf6e6..f629e6526935 100644
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -36,9 +36,9 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
>> unsigned long text, lib, swap, anon, file, shmem;
>> unsigned long hiwater_vm, total_vm, hiwater_rss, total_rss;
>> - anon = get_mm_counter(mm, MM_ANONPAGES);
>> - file = get_mm_counter(mm, MM_FILEPAGES);
>> - shmem = get_mm_counter(mm, MM_SHMEMPAGES);
>> + anon = get_mm_counter_sum(mm, MM_ANONPAGES);
>
>
> Hi Baolin Wang,
>
> We also observed the same issue, where the RSS value in /proc/PID/status
> was 0 on machines with a high number of CPUs. With this patch, the issue
> got fixed.
Yes, we also observed this issue.
> Rss value without this patch
> ----------------------------
> # cat /proc/87406/status
> .....
> VmRSS: 0 kB
> RssAnon: 0 kB
> RssFile: 0 kB
>
>
> Rss values with this patch
> --------------------------
> # cat /proc/3055/status
> VmRSS: 2176 kB
> RssAnon: 512 kB
> RssFile: 1664 kB
> RssShmem: 0 kB
>
> Tested-by: Donet Tom <donettom@...ux.ibm.com>
Thanks for testing.