Message-ID: <8f8023cb-3c31-4ead-a9e6-03a10e9490c6@redhat.com>
Date: Wed, 20 Dec 2023 11:16:46 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>, Marc Zyngier <maz@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>, James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu
<yuzenghui@...wei.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Matthew Wilcox <willy@...radead.org>, Yu Zhao <yuzhao@...gle.com>,
Mark Rutland <mark.rutland@....com>, Kefeng Wang
<wangkefeng.wang@...wei.com>, John Hubbard <jhubbard@...dia.com>,
Zi Yan <ziy@...dia.com>, Barry Song <21cnbao@...il.com>,
Alistair Popple <apopple@...dia.com>, Yang Shi <shy828301@...il.com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()
On 20.12.23 11:11, Ryan Roberts wrote:
> On 20/12/2023 09:54, David Hildenbrand wrote:
>> On 20.12.23 10:51, Ryan Roberts wrote:
>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>
>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>> range in the first place.
>>>>>>>>>
>>>>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>> folio case. By default, pte_batch_remaining() is a compile-time constant of 1,
>>>>>>>>> which enables the compiler to simplify the extra loops that are added
>>>>>>>>> for batching and produce code that is equivalent to (and equally
>>>>>>>>> performant as) the previous implementation.
>>>>>>>>>
>>>>>>>>> This change addresses the core-mm refactoring only and a separate change
>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>
>>>>>>>>> To ensure the arm64 is performant once implemented, this change is very
>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>
>>>>>>>>> The following microbenchmark results demonstrate that there is no
>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>> tight loop in a process with 1G of populated memory and the time for the
>>>>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests were
>>>>>>>>> performed for the case where the 1G of memory is composed of order-0 folios
>>>>>>>>> and the case where it is composed of pte-mapped order-9 folios. Negative is
>>>>>>>>> faster, positive is slower, compared to the baseline upon which the series is
>>>>>>>>> based:
>>>>>>>>>
>>>>>>>>> | Apple M2 VM | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>> | fork |-------------------|-------------------|
>>>>>>>>> | microbench | mean | stdev | mean | stdev |
>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>> | baseline | 0.0% | 1.1% | 0.0% | 1.2% |
>>>>>>>>> | after-change | -1.0% | 2.0% | -0.1% | 1.1% |
>>>>>>>>>
>>>>>>>>> | Ampere Altra | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>> | fork |-------------------|-------------------|
>>>>>>>>> | microbench | mean | stdev | mean | stdev |
>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>> | baseline | 0.0% | 1.0% | 0.0% | 0.1% |
>>>>>>>>> | after-change | -0.1% | 1.2% | -0.1% | 0.1% |
>>>>>>>>>
>>>>>>>>> Tested-by: John Hubbard <jhubbard@...dia.com>
>>>>>>>>> Reviewed-by: Alistair Popple <apopple@...dia.com>
>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>>>>>>>>> ---
>>>>>>>>> include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>>> mm/memory.c | 92 ++++++++++++++++++++++++++---------------
>>>>>>>>> 2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>> #define arch_flush_lazy_mmu_mode() do {} while (0)
>>>>>>>>> #endif
>>>>>>>>> +#ifndef pte_batch_remaining
>>>>>>>>> +/**
>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>> + *
>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>>>>>> + * over ptes.
>>>>>>>>> + *
>>>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>>>> + */
>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>>>>> +						unsigned long end)
>>>>>>>>> +{
>>>>>>>>> + return 1;
>>>>>>>>> +}
>>>>>>>>> +#endif
>>>>>>>>
>>>>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>>>>
>>>>>>>> Was there no way to have some basic batching mechanism that doesn't require arch
>>>>>>>> specifics?
>>>>>>>
>>>>>>> I tried a bunch of things but ultimately the way I've done it was the only way
>>>>>>> to reduce the order-0 fork regression to 0.
>>>>>>>
>>>>>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>>>>>> arch-specific version that didn't resolve to a compile-time constant 1 still
>>>>>>> cost an extra 3%.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>
>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>> * Check that PFN is consecutive
>>>>>>>> * Check that all PFNs belong to the same folio
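
(To make that concrete: roughly the shape of check I had in mind. This is
purely illustrative and untested, and the helper names are made up for this
mail, not taken from any real series:)

/* illustrative helper, name made up for this mail */
static inline pte_t __pte_strip_pfn(pte_t pte)
{
	/* assumes pfn_pte()/pte_pgprot() can be used for this comparison */
	return pfn_pte(0, pte_pgprot(pte));
}

/*
 * Count consecutive PTEs that map consecutive PFNs and are otherwise
 * identical; the caller clamps max_nr so the batch stays in one folio.
 */
static int pte_batch_count(pte_t *ptep, pte_t pte, int max_nr)
{
	unsigned long pfn = pte_pfn(pte);
	int nr = 1;

	while (nr < max_nr) {
		pte_t next = ptep_get(ptep + nr);

		if (pte_pfn(next) != pfn + nr)
			break;
		if (!pte_same(__pte_strip_pfn(next), __pte_strip_pfn(pte)))
			break;
		nr++;
	}
	return nr;
}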
>>>>>>>
>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get the
>>>>>>> regression under 4% with this. Further along the series I spent a lot of time
>>>>>>> having to fiddle with the arm64 implementation; every conditional and every
>>>>>>> memory read (even when in cache) was a problem. There is just so little in the
>>>>>>> inner loop that every instruction matters. (At least on Ampere Altra and Apple
>>>>>>> M2).
>>>>>>>
>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the benefit to
>>>>>>> order-9 is around 10% in my measurements. Personally though, I'd prefer to play
>>>>>>> safe and ensure the common order-0 case doesn't regress, as you previously
>>>>>>> suggested.
>>>>>>>
>>>>>>
>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching series.
>>>>>> I implemented very generic and simple batching for large folios (all PTE bits
>>>>>> except the PFN have to match).
>>>>>>
>>>>>> Some very quick testing (don't trust every last %) on Intel(R) Xeon(R) Silver
>>>>>> 4210R CPU.
>>>>>>
>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>
>>>>>> -> Around 1.7 % faster
>>>>>>
>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>
>>>>>> -> Around 36.3 % faster
>>>>>
>>>>> Well I guess that shows me :)
>>>>>
>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>
>>>>
>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>
>>> I've just been trying to compile and noticed this. Will take a look at your
>>> update.
>>>
>>> But upon review, I've noticed the part that I think makes this difficult for
>>> arm64 with the contpte optimization; you are calling ptep_get() for every pte in
>>> the batch. While this is functionally correct, once arm64 has the contpte
>>> changes, its ptep_get() has to read every pte in the contpte block in order to
>>> gather the access and dirty bits. So if your batching function ends up walking
>>> a 16 entry contpte block, that will cause 16 x 16 reads, which kills
>>> performance. That's why I added the arch-specific pte_batch_remaining()
>>> function; this allows the core-mm to skip to the end of the contpte block and
>>> avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s instead
>>> of 256.
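
Just to check I'm following the cont-pte part: the arm64 override would then be
roughly of this shape, so the core loop can jump straight to the next contpte
boundary? (Illustrative only; I'm guessing at names like pte_cont() and
CONT_PTE_SIZE, the real details are in your series.)

static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned long next;

	/* no contiguous bit -> keep the default batch size of 1 */
	if (!pte_cont(pte))	/* pte_cont()/CONT_PTE_SIZE: guessed arm64 names */
		return 1;

	/* pages remaining up to the next contpte block boundary, capped at end */
	next = ALIGN(addr + 1, CONT_PTE_SIZE);
	return (min(next, end) - addr) >> PAGE_SHIFT;
}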
>>>
>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid the bit
>>> gathering. But we have a similar problem in zap_pte_range() and that function
>>> needs the dirty bit to update the folio. So it doesn't work there. (see patch 3
>>> in my series).
>>>
>>> I guess you are going to say that we should combine both approaches, so that
>>> your batching loop can skip forward an arch-provided number of ptes? That would
>>> certainly work, but feels like an orthogonal change to what I'm trying to
>>> achieve :). Anyway, I'll spend some time playing with it today.
>>
>> You can overwrite the function or add special-casing internally, yes.
>>
>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()" and it
>> doesn't do any of that besides preparing for some arm64 work.
>>
>
> Well it allows an arch to opt in to batching. But I see your point.
>
> How do you want to handle your patches? Do you want to clean them up and I'll
> base my stuff on top? Or do you want me to take them and sort it all out?
Whatever you prefer; it was mostly a quick prototype to see if we can
achieve decent performance.
I can fixup the arch thingies (most should be easy, some might require a
custom pte_next_pfn()) and you can focus on getting cont-pte sorted out
on top [I assume that's what you want to work on :) ].
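
For the generic pte_next_pfn() fallback I'm thinking of something as trivial as
the below (only a sketch; it assumes pfn_pte()/pte_pgprot() do the right thing
on a given arch, otherwise that arch needs its own version):

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* sketch: advance the PFN by one, keep all other bits */
	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
}
#endif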
>
> As I see it at the moment, I would keep your folio_pte_batch() always core, but
> in a subsequent patch, have it use pte_batch_remaining() (the arch function I have
> in my series, which defaults to one).
Just double-checking: how would it use pte_batch_remaining()?
> Then do a similar thing to what you have
> done for fork in zap_pte_range() - also using folio_pte_batch(). Then lay my
> series on top.
Yes, we should probably try to handle the zapping part similarly: make
it benefit all archs first, then special-case on cont-pte. I can help
there as well.
--
Cheers,
David / dhildenb