Message-ID: <b2876d37-a342-41f6-9613-dd4bfaa5841b@huawei.com>
Date: Thu, 7 Dec 2023 23:50:32 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Ryan Roberts <ryan.roberts@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Yin Fengwei <fengwei.yin@...el.com>,
David Hildenbrand <david@...hat.com>,
Yu Zhao <yuzhao@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Anshuman Khandual <anshuman.khandual@....com>,
Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Itaru Kitayama <itaru.kitayama@...il.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
John Hubbard <jhubbard@...dia.com>,
David Rientjes <rientjes@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Hugh Dickins <hughd@...gle.com>,
Barry Song <21cnbao@...il.com>,
Alistair Popple <apopple@...dia.com>
CC: <linux-mm@...ck.org>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v8 00/10] Multi-size THP for anonymous memory
On 2023/12/6 18:08, Ryan Roberts wrote:
> On 05/12/2023 14:19, Kefeng Wang wrote:
>>
>>
>> On 2023/12/4 18:20, Ryan Roberts wrote:
>>> Hi All,
>>>
>>> A new week, a new version, a new name... This is v8 of a series to implement
>>> multi-size THP (mTHP) for anonymous memory (previously called "small-sized THP"
>>> and "large anonymous folios"). Matthew objected to "small huge" so hopefully
>>> this fares better.
>>>
>>> The objective of this is to improve performance by allocating larger chunks of
>>> memory during anonymous page faults:
>>>
>>> 1) Since SW (the kernel) is dealing with larger chunks of memory than base
>>> pages, there are efficiency savings to be had; fewer page faults, batched PTE
>>> and RMAP manipulation, reduced LRU list overhead, etc. In short, we reduce kernel
>>> overhead. This should benefit all architectures.
>>> 2) Since we are now mapping physically contiguous chunks of memory, we can take
>>> advantage of HW TLB compression techniques. A reduction in TLB pressure
>>> speeds up kernel and user space. arm64 systems have 2 mechanisms to coalesce
>>> TLB entries: "the contiguous bit" (architectural) and HPA (uarch).
>>>
>>> This version changes the name and tidies up some of the kernel code and test
>>> code, based on feedback against v7 (see change log for details).
>>>
>>> By default, the existing behaviour (and performance) is maintained. The user
>>> must explicitly enable multi-size THP to see the performance benefit. This is
>>> done via a new sysfs interface (as recommended by David Hildenbrand - thanks to
>>> David for the suggestion)! This interface is inspired by the existing
>>> per-hugepage-size sysfs interface used by hugetlb, provides full backwards
>>> compatibility with the existing PMD-size THP interface, and provides a base for
>>> future extensibility. See [8] for detailed discussion of the interface.
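>>>
>>> As a rough sketch of the intended usage (treat the exact control file
>>> names and values as illustrative here; see [8] for the authoritative
>>> interface discussion):
>>>
>>>   # Opt 64K mTHP in for MADV_HUGEPAGE regions only:
>>>   echo madvise > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
>>>
>>>   # Or have it follow the existing top-level (PMD-size THP) setting:
>>>   echo inherit > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled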
>>>
>>> This series is based on mm-unstable (715b67adf4c8).
>>>
>>>
>>> Prerequisites
>>> =============
>>>
>>> Some work items identified as being prerequisites are listed on page 3 at [9].
>>> The summary is:
>>>
>>> | item | status |
>>> |:------------------------------|:------------------------|
>>> | mlock | In mainline (v6.7) |
>>> | madvise | In mainline (v6.6) |
>>> | compaction | v1 posted [10] |
>>> | numa balancing | Investigated: see below |
>>> | user-triggered page migration | In mainline (v6.7) |
>>> | khugepaged collapse | In mainline (NOP) |
>>>
>>> On NUMA balancing, which currently ignores any PTE-mapped THPs it encounters,
>>> John Hubbard has investigated this and concluded that A) it is not clear at the
>>> moment what a better policy might be for PTE-mapped THP, and B) it is
>>> questionable whether this should really be considered a prerequisite, given that
>>> no regression is caused for the default "multi-size THP disabled" case and there
>>> is no correctness issue when it is enabled - it's just a potential for
>>> non-optimal performance.
>>>
>>> If there are no disagreements about removing numa balancing from the list (none
>>> were raised when I first posted this comment against v7), then that just leaves
>>> compaction which is in review on list at the moment.
>>>
>>> I really would like to get this series (and its remaining compaction
>>> prerequisite) in for v6.8. I accept that this may be a bit optimistic at this
>>> point, but let's see where we get to with review.
>>>
>>>
>>> Testing
>>> =======
>>>
>>> The series includes patches for mm selftests to enlighten the cow and khugepaged
>>> tests to explicitly test with multi-size THP, in the same way that PMD-sized
>>> THP is tested. The new tests all pass, and no regressions are observed in the mm
>>> selftest suite. I've also run my usual kernel compilation and JavaScript
>>> benchmarks without any issues.
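>>>
>>> (For anyone wanting to reproduce: the mm selftests can be run from a
>>> kernel tree in the usual kselftest way - shown here as a reminder rather
>>> than anything specific to this series:
>>>
>>>   make -C tools/testing/selftests TARGETS=mm
>>>   sudo make -C tools/testing/selftests TARGETS=mm run_tests
>>> )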
>>>
>>> Refer to my performance numbers posted with v6 [6]. (These are for multi-size
>>> THP only - they do not include the arm64 contpte follow-on series).
>>>
>>> John Hubbard at Nvidia has indicated dramatic 10x performance improvements for
>>> some workloads at [11]. (Observed using v6 of this series as well as the arm64
>>> contpte series).
>>>
>>> Kefeng Wang at Huawei has also indicated he sees improvements at [12], although
>>> there are some latency regressions as well.
>>
>> Hi Ryan,
>>
>> Here are some test results based on v6.7-rc1 +
>> [PATCH v7 00/10] Small-sized THP for anonymous memory +
>> [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
>>
>> case1: basepage 64K
>> case2: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 3
>> case3: basepage 4K + thp=64k + PAGE_ALLOC_COSTLY_ORDER = 4
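>>
>> (Context on case2 vs case3: PAGE_ALLOC_COSTLY_ORDER is the order above
>> which the page allocator treats an allocation as "costly" and backs off
>> from reclaim/compaction more readily; mainline defines it as 3 in
>> include/linux/mmzone.h, while 64K THP on a 4K base page is an order-4
>> allocation. case3 is presumably a test-only rebuild along the lines of:
>>
>>   #define PAGE_ALLOC_COSTLY_ORDER 4   /* mainline value: 3 */
>> )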
>
> Thanks for sharing these results. With the exception of a few outliers, it looks
> like the rough conclusion is that bandwidth improves, but not as much as with 64K
> base pages, and latency regresses, but also not as much as with 64K base pages?
It depends on the test case; both configurations have their own advantages and
disadvantages, but 64K base pages are still better in most cases.
>
> I expect that over time, as we add more optimizations, we will get bandwidth
> closer to 64K base pages; one crucial one is getting executable file-backed
> memory into contpte mappings, for example.
Yes, this will take some time to optimize. Maybe we could also provide more
policy controls, e.g. order selection, or per-task/per-cgroup control?
>
> It's probably not time to switch PAGE_ALLOC_COSTLY_ORDER quite yet, but
> something to keep an eye on and consider down the road?
This change was just for testing; it doesn't seem to yield a large gain in
the unixbench/lmbench test cases, and in any case it shouldn't be considered
as part of this patchset.