Message-ID: <fb12d5bd-de74-a4da-8a38-db64cfb3e5d3@arm.com>
Date: Thu, 3 Aug 2023 11:27:43 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Yin Fengwei <fengwei.yin@...el.com>, Yu Zhao <yuzhao@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Itaru Kitayama <itaru.kitayama@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v4 2/5] mm: LARGE_ANON_FOLIO for improved performance
On 03/08/2023 10:58, Yin Fengwei wrote:
>
>
> On 8/3/23 17:32, Ryan Roberts wrote:
>> On 03/08/2023 09:37, Yin Fengwei wrote:
>>>
>>>
>>> On 8/3/23 16:21, Ryan Roberts wrote:
>>>> On 03/08/2023 09:05, Yin Fengwei wrote:
>>>>
>>>> ...
>>>>
>>>>>> I've captured run time and peak memory usage, and taken the mean. The stddev
>>>>>> for the peak memory usage is big-ish, but I'm confident this still captures
>>>>>> the central tendency well:
>>>>>>
>>>>>> | MAX_ORDER_UNHINTED | real-time | kern-time | user-time | peak memory |
>>>>>> |:-------------------|------------:|------------:|------------:|:------------|
>>>>>> | 4k | 0.0% | 0.0% | 0.0% | 0.0% |
>>>>>> | 16k | -3.6% | -26.5% | -0.5% | -0.1% |
>>>>>> | 32k | -4.8% | -37.4% | -0.6% | -0.1% |
>>>>>> | 64k | -5.7% | -42.0% | -0.6% | -1.1% |
>>>>>> | 128k | -5.6% | -42.1% | -0.7% | 1.4% |
>>>>>> | 256k | -4.9% | -41.9% | -0.4% | 1.9% |
>>>>>
>>>>> Here are my test results:
>>>>>
>>>>>               real    user     sys
>>>>> hink-4k:        0%      0%      0%
>>>>> hink-16K:      -3%    0.1%  -18.3%
>>>>> hink-32K:      -4%    0.2%  -27.2%
>>>>> hink-64K:      -4%    0.5%  -31.0%
>>>>> hink-128K:     -4%    0.9%  -33.7%
>>>>> hink-256K:     -5%      1%  -34.6%
>>>>>
>>>>>
>>>>> I used the command:
>>>>> /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" make -skj96 allmodconfig all
>>>>> to build the kernel and collect the real/user/kernel times.
>>>>> /sys/kernel/mm/transparent_hugepage/enabled is "madvise".
>>>>> Let me know if you have any questions about the test.
>>>>
>>>> Thanks for doing this! I have a couple of questions:
>>>>
>>>> - how many times did you run each test?
>>> Three times for each ANON_FOLIO_MAX_ORDER_UNHINTED. The stddev is quite
>>> small, less than 1%.
>>
>> And out of interest, were you running on bare metal or in a VM? And did you
>> reboot between each run?
> I ran the test on a bare metal env. I didn't reboot for every run, but had to
> reboot for each different ANON_FOLIO_MAX_ORDER_UNHINTED size. I do
> echo 3 > /proc/sys/vm/drop_caches
> for every run after "make mrproper", even after a fresh boot.
>
>
>>
>>>>
>>>> - how did you configure the large page size? (I sent an email out yesterday
>>>> saying that I was doing it wrong in my tests, so the 128k and 256k results
>>>> for my test set are not valid.)
>>> I changed the ANON_FOLIO_MAX_ORDER_UNHINTED definition manually every time.
>>
>> In that case, I think your results are broken in a similar way to mine. This
>> code means that order will never be higher than 3 (32K) on x86:
>>
>> + order = max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
>> +
>> + if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
>> + order = min(order, ANON_FOLIO_MAX_ORDER_UNHINTED);
>>
>> On x86, arch_wants_pte_order() is not implemented and the default returns -1, so
>> you end up with:
> I added arch_wants_pte_order() for x86 and gave it a very large number. So the
> order is decided by ANON_FOLIO_MAX_ORDER_UNHINTED. I suppose my data is valid.
Ahh great! OK, sorry for the noise.
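
Just so I understand the hack: I assume you added something like the below on
the x86 side (a rough sketch; the function name comes from the patch, but the
value and placement are guesses). Any order comfortably bigger than the ones
under test works, since the subsequent min() against
ANON_FOLIO_MAX_ORDER_UNHINTED then picks the limit:

  #define arch_wants_pte_order arch_wants_pte_order
  static inline int arch_wants_pte_order(void)
  {
          /* "Very large": dominates the max() with PAGE_ALLOC_COSTLY_ORDER,
           * so the min() against ANON_FOLIO_MAX_ORDER_UNHINTED decides the
           * order actually used. */
          return 8;
  }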
Given that part of the rationale for the experiment was to plot perf against
memory usage, did you collect any memory numbers? (If it's easy to re-run, GNU
time's %M specifier reports peak RSS, which might be a cheap way to get them.)
>
>>
>> order = min(PAGE_ALLOC_COSTLY_ORDER, ANON_FOLIO_MAX_ORDER_UNHINTED)
>>
>> So your 4k, 16k and 32k results should be valid, but the 64k, 128k and 256k
>> results would actually be using 32k, I think? Which is odd, because those
>> results differ by more than the < 1% stddev you quoted above. So perhaps this
>> is down to rebooting (KASLR, or something...?)
>>
>> (on arm64, arch_wants_pte_order() returns 4, so my 64k result is also valid).
>>
>> As a quick hack to work around this, would you be able to change the code to this:
>>
>> + if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
>> + order = ANON_FOLIO_MAX_ORDER_UNHINTED;
>>
>>>
>>>>
>>>> - what does "hink" mean??
>>> Sorry for the typo. It should be ANON_FOLIO_MAX_ORDER_UNHINTED.
>>>
>>>>
>>>>>
>>>>> I also found one strange behavior with this version. It's related to why
>>>>> I need to set /sys/kernel/mm/transparent_hugepage/enabled to "madvise".
>>>>> If it's "never", large folios are disabled as well.
>>>>> If it's "always", THP is used ahead of large folios, so the system is in a
>>>>> mixed mode, which is not suitable for this test.
>>>>
>>>> We had a discussion around this in the THP meeting yesterday. I'm going to write
>>>> this up properly so we can have a systematic discussion. The tentative
>>>> conclusion is that MADV_NOHUGEPAGE must continue to mean "do not fault in more
>>>> than is absolutely necessary". I would assume we need to extend that thinking to
>>>> the process-wide and system-wide knobs (as is done in the patch), but we didn't
>>>> explicitly say so in the meeting.
>>> There are cases where THP is not wanted because of its latency or memory
>>> consumption. For those cases, large folios may fill the gap, with lower
>>> latency and memory consumption.
>>>
>>>
>>> So if disabling THP means large folios can't be used, we lose the chance to
>>> benefit those cases with large folios.
>>
>> Yes, I appreciate that. But there are also real use cases that expect
>> MADV_NOHUGEPAGE to mean "do not fault more than is absolutely necessary", and
>> those use cases break if that's not obeyed (e.g. live migration w/ qemu). So I
>> think we need to be conservative to start. The apps that explicitly forbid THP
>> today should, in the long run, be updated to opt in to large anon folios using
>> some as-yet-undefined control.
> Fair enough.
>
>
> Regards
> Yin, Fengwei
>
>>
>>>
>>>
>>> Regards
>>> Yin, Fengwei
>>>
>>>>
>>>> My intention is that if you have requested THP and your vma is big enough for
>>>> PMD-size, then you get that; otherwise you fall back to large anon folios. And
>>>> if you have neither opted in nor out, then you get large anon folios.
>>>>
>>>> We talked about the idea of adding a new knob that lets you set the max order,
>>>> but that needs a lot more thought.
>>>>
>>>> Anyway, as I said, I'll write it up so we can all systematically discuss.
>>>>
>>>>>
>>>>> So if it's "never", large folios are disabled. But why does "madvise" enable
>>>>> large folios unconditionally? Should they only be enabled for the VMA ranges
>>>>> which the user has madvised for large folios (or THP)?
>>>>>
>>>>> Specifically for the hink setting, my understanding is that we can't choose
>>>>> it based on this test alone. Other workloads may behave differently with a
>>>>> different hink setting.
>>>>>
>>>>>
>>>>> Regards
>>>>> Yin, Fengwei
>>>>>
>>>>
>>