Message-ID: <b15205cd-33e3-6cac-b6a4-65266be7a9c8@suse.cz>
Date: Tue, 29 Jan 2019 16:52:21 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>,
Sandeep Patil <sspatil@...roid.com>
Cc: adobriyan@...il.com, avagin@...nvz.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org,
kernel-team@...roid.com, dancol@...gle.com
Subject: Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation
On 1/29/19 1:15 AM, Andrew Morton wrote:
> On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@...roid.com> wrote:
>
>> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
>> it accumulated the current pss every time a locked VMA was found.
>>
>> Fix that by recording the current pss value before each VMA is walked,
>> so that we only add the delta if the VMA was found to be VM_LOCKED.
>>
>> ...
>>
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  #endif
>>  		.mm = vma->vm_mm,
>>  	};
>> +	unsigned long pss;
>>
>>  	smaps_walk.private = mss;
>>
>> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  		}
>>  	}
>>  #endif
>> -
>> +	/* record current pss so we can calculate the delta after page walk */
>> +	pss = mss->pss;
>>  	/* mmap_sem is held in m_start */
>>  	walk_page_vma(vma, &smaps_walk);
>>  	if (vma->vm_flags & VM_LOCKED)
>> -		mss->pss_locked += mss->pss;
>> +		mss->pss_locked += mss->pss - pss;
>>  }
>
> This seems to be a rather obscure way of accumulating
> mem_size_stats.pss_locked. Wouldn't it make more sense to do this in
> smaps_account(), wherever we increment mem_size_stats.pss?
>
> It would be a tiny bit less efficient but I think that the code cleanup
> justifies such a cost?
Yeah. Sandeep, could you add a 'bool locked' param to smaps_account() and check
it there? We probably don't need the whole vma param yet.
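
Something like this completely untested sketch, perhaps - the unrelated
accounting in smaps_account() is elided, and smaps_pmd_entry() would pass
the flag down the same way as smaps_pte_entry():

static void smaps_account(struct mem_size_stats *mss, struct page *page,
		bool compound, bool young, bool dirty, bool locked)
{
	...
	/* wherever pss is accumulated, accumulate pss_locked with it */
	mss->pss += pss;
	if (locked)
		mss->pss_locked += pss;
	...
}

static void smaps_pte_entry(pte_t *pte, unsigned long addr,
		struct mm_walk *walk)
{
	struct mem_size_stats *mss = walk->private;
	struct vm_area_struct *vma = walk->vma;
	bool locked = !!(vma->vm_flags & VM_LOCKED);
	...
	smaps_account(mss, page, false, young, dirty, locked);
}

That would also let smap_gather_stats() drop the pss snapshot/delta logic
entirely.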
Thanks.