Message-ID: <87zfv32aq7.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Wed, 13 Mar 2024 09:15:28 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand
<david@...hat.com>, Matthew Wilcox <willy@...radead.org>, Gao Xiang
<xiang@...nel.org>, Yu Zhao <yuzhao@...gle.com>, Yang Shi
<shy828301@...il.com>, Michal Hocko <mhocko@...e.com>, Kefeng Wang
<wangkefeng.wang@...wei.com>, Barry Song <21cnbao@...il.com>, Chris Li
<chrisl@...nel.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 0/6] Swap-out mTHP without splitting

Ryan Roberts <ryan.roberts@....com> writes:

> On 12/03/2024 08:49, Ryan Roberts wrote:
>> On 12/03/2024 08:01, Huang, Ying wrote:
>>> Ryan Roberts <ryan.roberts@....com> writes:
>>>
>>>> Hi All,
>>>>
>>>> This series adds support for swapping out multi-size THP (mTHP) without needing
>>>> to first split the large folio via split_huge_page_to_list_to_order(). It
>>>> closely follows the approach already used to swap-out PMD-sized THP.
>>>>
>>>> There are a couple of reasons for swapping out mTHP without splitting:
>>>>
>>>> - Performance: It is expensive to split a large folio and, under extreme
>>>> memory pressure, some workloads saw a performance regression when using 64K
>>>> mTHP vs 4K small folios because of this extra cost in the swap-out path.
>>>> This series not only eliminates the regression but makes it faster to swap
>>>> out 64K mTHP than 4K small folios.
>>>>
>>>> - Memory fragmentation avoidance: If we can avoid splitting a large folio,
>>>> memory is less likely to become fragmented, making it easier to re-allocate
>>>> a large folio in future.
>>>>
>>>> - Performance: Enables a separate series [4] to swap-in whole mTHPs, which
>>>> means we won't lose the TLB-efficiency benefits of mTHP once the memory has
>>>> been through a swap cycle.
>>>>
>>>> I've done what I thought was the smallest change possible, and as a result, this
>>>> approach is only employed when the swap is backed by a non-rotating block device
>>>> (just as PMD-sized THP is supported today). Discussion against the RFC concluded
>>>> that this is sufficient.
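>>>>
>>>> As a rough sketch of what that gate looks like (the helper name below is
>>>> mine, purely illustrative; SWP_SOLIDSTATE is the existing swap_info_struct
>>>> flag for non-rotating devices):
>>>>
>>>> /* Illustrative only: attempt a large (order > 0) swap entry allocation
>>>>  * only when the backing device is non-rotating, mirroring how PMD-sized
>>>>  * THP is handled today. */
>>>> static bool swap_may_alloc_large(struct swap_info_struct *si, int order)
>>>> {
>>>>         if (!order)
>>>>                 return true;    /* order-0 entries are always allowed */
>>>>         return si->flags & SWP_SOLIDSTATE;
>>>> }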
>>>>
>>>>
>>>> Performance Testing
>>>> ===================
>>>>
>>>> I've run some swap performance tests on an Ampere Altra VM (arm64) with 8
>>>> CPUs. The VM is set up with a 35G block ram device as the swap device, and
>>>> the test is run from inside a memcg limited to 40G of memory. I've then run
>>>> `usemem` from vm-scalability with 70 processes, each allocating and writing
>>>> 1G of memory. I've
>>>> repeated everything 6 times and taken the mean performance improvement relative
>>>> to 4K page baseline:
>>>>
>>>> | alloc size | baseline | + this series |
>>>> | | v6.6-rc4+anonfolio | |
>>>> |:-----------|--------------------:|--------------------:|
>>>> | 4K Page | 0.0% | 1.4% |
>>>> | 64K THP | -14.6% | 44.2% |
>>>> | 2M THP | 87.4% | 97.7% |
>>>>
>>>> So with this change, the 64K swap performance goes from a 15% regression to a
>>>> 44% improvement. 4K and 2M swap performance also improves slightly.
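>>>>
>>>> For reference, the core of the usemem workload described above is roughly
>>>> equivalent to this stand-alone C stand-in (process count and per-process
>>>> size mirror the test; this is my sketch, not vm-scalability code):
>>>>
>>>> #include <stdlib.h>
>>>> #include <string.h>
>>>> #include <sys/mman.h>
>>>> #include <sys/wait.h>
>>>> #include <unistd.h>
>>>>
>>>> #define NPROC 70                /* 70 concurrent processes */
>>>> #define SZ (1UL << 30)          /* 1G of anonymous memory each */
>>>>
>>>> int main(void)
>>>> {
>>>>         for (int i = 0; i < NPROC; i++) {
>>>>                 if (fork() == 0) {
>>>>                         char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
>>>>                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>>                         if (p == MAP_FAILED)
>>>>                                 exit(1);
>>>>                         memset(p, 1, SZ); /* fault in and write every page */
>>>>                         exit(0);
>>>>                 }
>>>>         }
>>>>         while (wait(NULL) > 0)  /* reap all children */
>>>>                 ;
>>>>         return 0;
>>>> }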
>>>
>>> I don't understand why the performance of 2M THP improves. The swap
>>> entry allocation becomes a little slower. Can you provide some
>>> perf profile data to root-cause it?
>>
>> I didn't post the stdev, which is quite large (~10%), so that may explain some
>> of it:
>>
>> | kernel | mean_rel | std_rel |
>> |:---------|-----------:|----------:|
>> | base-4K | 0.0% | 5.5% |
>> | base-64K | -14.6% | 3.8% |
>> | base-2M | 87.4% | 10.6% |
>> | v4-4K | 1.4% | 3.7% |
>> | v4-64K | 44.2% | 11.8% |
>> | v4-2M | 97.7% | 13.3% |
>>
>> Regardless, I'll do some perf profiling and post results shortly.
>
> I did a lot more runs (24 for each config) and averaged them to try to remove
> the noise in the measurements. The results now show only a 4% improvement for
> 2M, so I don't think the 2M improvement is real:
>
> | kernel | mean_rel | std_rel |
> |:---------|-----------:|----------:|
> | base-4K | 0.0% | 3.2% |
> | base-64K | -9.1% | 10.1% |
> | base-2M | 88.9% | 6.8% |
> | v4-4K | 0.5% | 3.1% |
> | v4-64K | 44.7% | 8.3% |
> | v4-2M | 93.3% | 7.8% |
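>
> For clarity, one way to compute mean_rel/std_rel columns like these is the
> mean and sample standard deviation of each run's throughput relative to the
> baseline mean; the run values below are placeholders, not the real
> measurements:
>
> #include <math.h>
> #include <stdio.h>
>
> int main(void)
> {
>         double base_mean = 100.0;  /* baseline 4K mean throughput */
>         double runs[] = { 193.0, 188.5, 197.2, 201.1 };  /* placeholders */
>         int n = sizeof(runs) / sizeof(runs[0]);
>         double sum = 0.0, var = 0.0;
>
>         for (int i = 0; i < n; i++)  /* relative improvement per run */
>                 sum += runs[i] / base_mean - 1.0;
>         double mean = sum / n;
>         for (int i = 0; i < n; i++) {
>                 double d = (runs[i] / base_mean - 1.0) - mean;
>                 var += d * d;
>         }
>         printf("mean_rel %.1f%% std_rel %.1f%%\n",
>                100.0 * mean, 100.0 * sqrt(var / (n - 1)));
>         return 0;
> }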
>
> Looking at the perf data, the only thing that sticks out is that a big chunk of
> time is spent in contpte_convert(), called as a result of
> try_to_unmap_one(). This is present in both the before and after configs.
>
> This is an arm64 function to "unfold" contpte mappings. Essentially, the PMD is
> being split during shrink_folio_list() with TTU_SPLIT_HUGE_PMD, meaning the
> THPs are PTE-mapped in contpte blocks. Then we unmap each pte one-by-one,
> which means the contpte block needs to be unfolded. I think try_to_unmap_one()
> could potentially be optimized to batch unmap a contiguously mapped folio and
> avoid this unfold. But that would be an independent and separate piece of work.
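>
> Something along these lines inside try_to_unmap_one() is the sort of thing I
> have in mind (purely an untested sketch: get_and_clear_full_ptes() is the
> batched helper recently added for zap batching, and nr would come from a
> folio_pte_batch()-style check that the whole folio is contiguously mapped):
>
> if (nr > 1) {
>         /* one batched clear; arm64 can elide the per-PTE contpte unfold */
>         pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr, 0);
> } else {
>         pteval = ptep_get_and_clear(mm, address, pvmw.pte);
> }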

Thanks for the additional data and the detailed explanation.
--
Best Regards,
Huang, Ying