Message-ID: <f002188e-8990-4c72-ad84-966518279dce@redhat.com>
Date: Wed, 4 Dec 2024 15:37:22 +0100
From: David Hildenbrand <david@...hat.com>
To: Wenchao Hao <haowenchao22@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Oscar Salvador <osalvador@...e.de>,
Muhammad Usama Anjum <usama.anjum@...labora.com>,
Andrii Nakryiko <andrii@...nel.org>, Ryan Roberts <ryan.roberts@....com>,
Peter Xu <peterx@...hat.com>, Barry Song <21cnbao@...il.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH] smaps: count large pages smaller than PMD size to
anonymous_thp
On 04.12.24 15:30, Wenchao Hao wrote:
> On 2024/12/3 22:17, David Hildenbrand wrote:
>> On 03.12.24 14:49, Wenchao Hao wrote:
>>> Currently, /proc/xxx/smaps reports the size of anonymous huge pages
>>> for each VMA, but it does not include large pages smaller than PMD
>>> size.
>>>
>>> This patch adds anonymous huge pages that were allocated by mTHP and
>>> are smaller than PMD size to the AnonHugePages field in smaps.
>>>
>>> Signed-off-by: Wenchao Hao <haowenchao22@...il.com>
>>> ---
>>> fs/proc/task_mmu.c | 6 ++++++
>>> 1 file changed, 6 insertions(+)
>>>
>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>> index 38a5a3e9cba2..b655011627d8 100644
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>> @@ -717,6 +717,12 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>>  		if (!folio_test_swapbacked(folio) && !dirty &&
>>>  		    !folio_test_dirty(folio))
>>>  			mss->lazyfree += size;
>>> +
>>> +		/*
>>> +		 * Count large pages smaller than PMD size to anonymous_thp
>>> +		 */
>>> +		if (!compound && PageHead(page) && folio_order(folio))
>>> +			mss->anonymous_thp += folio_size(folio);
>>>  	}
>>>
>>>  	if (folio_test_ksm(folio))
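(Side note for readers following along: "!compound" means smaps_account()
is accounting an individual PTE here, and requiring PageHead() plus a
nonzero folio_order() makes the addition fire exactly once per large
folio. Today the field only counts PMD-sized THPs. A purely illustrative
userspace sketch -- not part of the patch -- to watch the existing field,
assuming a 2 MiB PMD size and THP enabled ("always" or "madvise"):

/*
 * Map 4 MiB of anonymous memory so at least one 2 MiB-aligned region
 * exists, hint THP via madvise(), fault everything in, then print our
 * own AnonHugePages lines from /proc/self/smaps.
 * MADV_HUGEPAGE needs _DEFAULT_SOURCE (the gcc default) to be visible.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	const size_t len = 4UL << 20;
	char line[256];
	FILE *f;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	madvise(p, len, MADV_HUGEPAGE);	/* a hint; may be ignored */
	memset(p, 1, len);		/* fault the range in */

	f = fopen("/proc/self/smaps", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "AnonHugePages:"))
			fputs(line, stdout);
	fclose(f);
	munmap(p, len);
	return 0;
}

With mTHP in play, a pte-mapped 64K folio would not show up there --
which is exactly what the patch wants to change.)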
>>
>>
>> I think we decided to leave this (and /proc/meminfo) as one of the
>> last interfaces that is only concerned with PMD-sized THPs:
>>
>
> Could you explain why?
>
> When analyzing the impact of mTHP on performance, we need to understand
> how much of a process's memory is actually backed by large pages.
> By comparing this value with the actual memory usage of the process,
> we can estimate the process's large page allocation success rate and
> further investigate how khugepaged is behaving. If the actual
> proportion of large pages is low, the performance of the process may
> suffer, which would show up directly as higher TLB miss and page
> fault counts.
>
> However, currently only PMD-sized large pages are counted, which is
> insufficient.
As Ryan said, we have scripts to analyze that. We have not yet reached
a conclusion on how to handle the smaps stats differently -- or whether
we want to at all.
>
>> Documentation/admin-guide/mm/transhuge.rst:
>>
>> The number of PMD-sized anonymous transparent huge pages currently used by the
>> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>> To identify what applications are using PMD-sized anonymous transparent huge
>> pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
>> fields for each mapping. (Note that AnonHugePages only applies to traditional
>> PMD-sized THP for historical reasons and should have been called
>> AnonHugePmdMapped).
>>
>
> Maybe rename this field, so that AnonHugePages can then also include
> mTHP huge pages?
It has the potential to break existing user space, which is why we
haven't looked into that yet.
AnonHugePmdMapped would be a lot cleaner, and could be added
independently. It would be required as a first step.
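FWIW, the per-process counting that the documentation above describes
boils down to summing the field over all mappings. A minimal sketch
(error handling mostly elided):

/*
 * Sum all AnonHugePages fields in /proc/PID/smaps.
 * Usage: ./anonhuge <pid>.  Values are in kB, as smaps prints them.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	unsigned long kb, total = 0;
	FILE *f;

	if (argc != 2)
		return 1;
	snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);
	f = fopen(path, "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "AnonHugePages: %lu kB", &kb) == 1)
			total += kb;
	fclose(f);
	printf("AnonHugePages: %lu kB total\n", total);
	return 0;
}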
--
Cheers,
David / dhildenb