Date: Wed, 20 Dec 2023 12:36:27 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>,
 Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Ard Biesheuvel <ardb@...nel.org>, Marc Zyngier <maz@...nel.org>,
 Oliver Upton <oliver.upton@...ux.dev>, James Morse <james.morse@....com>,
 Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu
 <yuzenghui@...wei.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>,
 Alexander Potapenko <glider@...gle.com>,
 Andrey Konovalov <andreyknvl@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
 Vincenzo Frascino <vincenzo.frascino@....com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Anshuman Khandual <anshuman.khandual@....com>,
 Matthew Wilcox <willy@...radead.org>, Yu Zhao <yuzhao@...gle.com>,
 Mark Rutland <mark.rutland@....com>, Kefeng Wang
 <wangkefeng.wang@...wei.com>, John Hubbard <jhubbard@...dia.com>,
 Zi Yan <ziy@...dia.com>, Barry Song <21cnbao@...il.com>,
 Alistair Popple <apopple@...dia.com>, Yang Shi <shy828301@...il.com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()

On 20.12.23 12:28, Ryan Roberts wrote:
> On 20/12/2023 10:56, David Hildenbrand wrote:
>> On 20.12.23 11:41, Ryan Roberts wrote:
>>> On 20/12/2023 10:16, David Hildenbrand wrote:
>>>> On 20.12.23 11:11, Ryan Roberts wrote:
>>>>> On 20/12/2023 09:54, David Hildenbrand wrote:
>>>>>> On 20.12.23 10:51, Ryan Roberts wrote:
>>>>>>> On 20/12/2023 09:17, David Hildenbrand wrote:
>>>>>>>> On 19.12.23 18:42, Ryan Roberts wrote:
>>>>>>>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>>>>>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>>>>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>>>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>>>>>>>>>>> batch is determined by the architecture with the new helper,
>>>>>>>>>>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>>>>>>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>>>>>>>>> one go in the parent using the new helper, ptep_set_wrprotects() and is
>>>>>>>>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>>>>>>>>
>>>>>>>>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>>>>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>>>>>>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>>>>>>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>>>>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>>>>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>>>>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>>>>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>>>>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>>>>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>>>>>>>>> range in the first place.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This code is very performance-sensitive, and a significant amount of
>>>>>>>>>>>>> effort has been put into not regressing performance for the order-0
>>>>>>>>>>>>> folio case. By default, pte_batch_remaining() is a compile-time
>>>>>>>>>>>>> constant 1, which enables the compiler to simplify the extra loops
>>>>>>>>>>>>> that are added for batching and produce code that is equivalent (and
>>>>>>>>>>>>> equally performant) to the previous implementation.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This change addresses the core-mm refactoring only; a separate change
>>>>>>>>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>>>>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>>>>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>>>>>>>>
>>>>>>>>>>>>> To ensure arm64 is performant once implemented, this change is very
>>>>>>>>>>>>> careful to only call ptep_get() once per pte batch.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The following microbenchmark results demonstrate that there is no
>>>>>>>>>>>>> significant performance change after this patch. Fork is called in a
>>>>>>>>>>>>> tight loop in a process with 1G of populated memory and the time for
>>>>>>>>>>>>> the function to execute is measured. 100 iterations per run, 8 runs
>>>>>>>>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>>>>>>>>> performed for the case where the 1G of memory is comprised of order-0
>>>>>>>>>>>>> folios and the case where it is comprised of pte-mapped order-9
>>>>>>>>>>>>> folios. Negative is faster, positive is slower, compared to the
>>>>>>>>>>>>> baseline upon which the series is based:
>>>>>>>>>>>>>
>>>>>>>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>>>>>>>
>>>>>>>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>>>>>>>> | fork          |-------------------|-------------------|
>>>>>>>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>>>>>>>
>>>>>>>>>>>>> Tested-by: John Hubbard <jhubbard@...dia.com>
>>>>>>>>>>>>> Reviewed-by: Alistair Popple <apopple@...dia.com>
>>>>>>>>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>         include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>>>>>>>         mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>>>>>>>>>         2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>>>>>>>> --- a/include/linux/pgtable.h
>>>>>>>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>>>>>>>         #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>>>>>>>>         #endif
>>>>>>>>>>>>>         +#ifndef pte_batch_remaining
>>>>>>>>>>>>> +/**
>>>>>>>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>>>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>>>>>>>> + * @addr: Address of the first page.
>>>>>>>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>>>>>>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>>>>>>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>>>>>>>>>> + * over ptes.
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>>>>>>>> + */
>>>>>>>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>>>>>>>>> +                        unsigned long end)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    return 1;
>>>>>>>>>>>>> +}
>>>>>>>>>>>>> +#endif
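For illustration, an arch-specific override of pte_batch_remaining() might look roughly like the sketch below. pte_cont() and CONT_PTE_SIZE are assumptions borrowed from the arm64 contpte work discussed further down in the thread; this is a sketch, not the posted implementation.

/*
 * Hypothetical arch override (sketch only): if the pte carries the
 * contiguous hint, report how many pages remain until the next
 * contpte-block boundary, clamped to the caller's ceiling; otherwise
 * fall back to a batch size of 1.
 */
#define pte_batch_remaining pte_batch_remaining
static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned long next;

	if (!pte_cont(pte))
		return 1;

	/* First address past the contpte block that contains addr. */
	next = ALIGN(addr + 1, CONT_PTE_SIZE);

	return (min(next, end) - addr) >> PAGE_SHIFT;
}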
>>>>>>>>>>>>
>>>>>>>>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>>>>>>>>
>>>>>>>>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>>>>>>>>> require arch specifics?
>>>>>>>>>>>
>>>>>>>>>>> I tried a bunch of things but ultimately the way I've done it was the
>>>>>>>>>>> only way to reduce the order-0 fork regression to 0.
>>>>>>>>>>>
>>>>>>>>>>> My original v3 posting was costing 5% extra and even my first attempt
>>>>>>>>>>> at an arch-specific version that didn't resolve to a compile-time
>>>>>>>>>>> constant 1 still cost an extra 3%.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>>>>>>>
>>>>>>>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>>>>>>>> * Check that PFN is consecutive
>>>>>>>>>>>> * Check that all PFNs belong to the same folio
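A minimal sketch of that kind of generic check, purely for illustration (pte_batch_count() is a made-up name; pte_next_pfn() is the helper mentioned further down in the thread):

/*
 * Sketch: count how many of the next max_nr ptes map consecutive pfns
 * and are otherwise identical to the leading pte, so they can be
 * handled as one batch. The caller is expected to clamp max_nr to the
 * folio / vma boundary.
 */
static inline int pte_batch_count(pte_t *start_ptep, pte_t pte, int max_nr)
{
	pte_t expected = pte_next_pfn(pte);
	int nr = 1;

	while (nr < max_nr) {
		if (!pte_same(ptep_get(start_ptep + nr), expected))
			break;
		expected = pte_next_pfn(expected);
		nr++;
	}

	return nr;
}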
>>>>>>>>>>>
>>>>>>>>>>> I haven't tried this exact approach, but I'd be surprised if I can get
>>>>>>>>>>> the regression under 4% with this. Further along the series I spent a
>>>>>>>>>>> lot of time having to fiddle with the arm64 implementation; every
>>>>>>>>>>> conditional and every memory read (even when in cache) was a problem.
>>>>>>>>>>> There is just so little in the inner loop that every instruction
>>>>>>>>>>> matters. (At least on Ampere Altra and Apple M2).
>>>>>>>>>>>
>>>>>>>>>>> Of course if you're willing to pay that 4-5% for order-0 then the
>>>>>>>>>>> benefit to order-9 is around 10% in my measurements. Personally though,
>>>>>>>>>>> I'd prefer to play safe and ensure the common order-0 case doesn't
>>>>>>>>>>> regress, as you previously suggested.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>>>>>>>>> series. I implemented very generic and simple batching for large folios
>>>>>>>>>> (all PTE bits except the PFN have to match).
>>>>>>>>>>
>>>>>>>>>> Some very quick testing (don't trust each last %) on Intel(R) Xeon(R)
>>>>>>>>>> Silver 4210R CPU.
>>>>>>>>>>
>>>>>>>>>> order-0: 0.014210 -> 0.013969
>>>>>>>>>>
>>>>>>>>>> -> Around 1.7 % faster
>>>>>>>>>>
>>>>>>>>>> order-9: 0.014373 -> 0.009149
>>>>>>>>>>
>>>>>>>>>> -> Around 36.3 % faster
>>>>>>>>>
>>>>>>>>> Well I guess that shows me :)
>>>>>>>>>
>>>>>>>>> I'll do a review and run the tests on my HW to see if it concurs.
>>>>>>>>
>>>>>>>>
>>>>>>>> I pushed a simple compile fixup (we need pte_next_pfn()).
>>>>>>>
>>>>>>> I've just been trying to compile and noticed this. Will take a look at your
>>>>>>> update.
>>>>>>>
>>>>>>> But upon review, I've noticed the part that I think makes this difficult
>>>>>>> for arm64 with the contpte optimization; you are calling ptep_get() for
>>>>>>> every pte in the batch. While this is functionally correct, once arm64 has
>>>>>>> the contpte changes, its ptep_get() has to read every pte in the contpte
>>>>>>> block in order to gather the access and dirty bits. So if your batching
>>>>>>> function ends up walking a 16-entry contpte block, that will cause 16 x 16
>>>>>>> reads, which kills performance. That's why I added the arch-specific
>>>>>>> pte_batch_remaining() function; this allows the core-mm to skip to the end
>>>>>>> of the contpte block and avoid ptep_get() for the 15 tail ptes. So we end
>>>>>>> up with 16 READ_ONCE()s instead of 256.
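Roughly, the consuming loop then only needs one ptep_get() per batch and can step over the tail entries; the following is a simplified sketch of that shape (assumed variable names, not the exact code from the v4 series):

/*
 * Sketch of the copy-loop shape with an arch-provided batch size: read
 * the leading pte once, then advance over the whole batch. For
 * 16-entry contpte blocks this is 16 leading reads instead of 16 * 16.
 */
do {
	pte_t pte = ptep_get(src_pte);
	unsigned int nr = pte_batch_remaining(pte, addr, end);

	/* ... write-protect the parent and populate the child here ... */

	addr += nr * PAGE_SIZE;
	src_pte += nr;
	dst_pte += nr;
} while (addr != end);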
>>>>>>>
>>>>>>> I considered making a ptep_get_noyoungdirty() variant, which would avoid
>>>>>>> the bit gathering. But we have a similar problem in zap_pte_range() and
>>>>>>> that function needs the dirty bit to update the folio. So it doesn't work
>>>>>>> there. (see patch 3 in my series).
>>>>>>>
>>>>>>> I guess you are going to say that we should combine both approaches, so
>>>>>>> that your batching loop can skip forward an arch-provided number of ptes?
>>>>>>> That would certainly work, but feels like an orthogonal change to what I'm
>>>>>>> trying to achieve :). Anyway, I'll spend some time playing with it today.
>>>>>>
>>>>>> You can overwrite the function or add special-casing internally, yes.
>>>>>>
>>>>>> Right now, your patch is called "mm: Batch-copy PTE ranges during fork()"
>>>>>> and it doesn't do any of that besides preparing for some arm64 work.
>>>>>>
>>>>>
>>>>> Well it allows an arch to opt-in to batching. But I see your point.
>>>>>
>>>>> How do you want to handle your patches? Do you want to clean them up and I'll
>>>>> base my stuff on top? Or do you want me to take them and sort it all out?
>>>>
>>>> Whatever you prefer, it was mostly a quick prototype to see if we can achieve
>>>> decent performance.
>>>
>>> I'm about to run it on Altra and M2. But I assume it will show similar results.
> 
> OK results in, not looking great, which aligns with my previous experience. That
> said, I'm seeing some "BUG: Bad page state in process gmain  pfn:12a094" so
> perhaps these results are not valid...

I haven't seen that so far on x86; maybe it's related to the PFN fixup?

> 
> 100 iterations per run, 8 runs over 2 reboots. Positive is slower than baseline,
> negative is faster:
> 
> Fork, order-0, Apple M2 VM:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.8% |
> | hugetlb-rmap-cleanups |       1.3% |      2.0% |
> | fork-batching         |       3.5% |      1.2% |
> 
> Fork, order-9, Apple M2 VM:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.8% |
> | hugetlb-rmap-cleanups |       0.9% |      0.9% |
> | fork-batching         |     -35.6% |      2.0% |
> 
> Fork, order-0, Ampere Altra:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.7% |
> | hugetlb-rmap-cleanups |       3.2% |      0.7% |
> | fork-batching         |       5.5% |      1.1% |
> 
> Fork, order-9, Ampere Altra:
> | kernel                |   mean_rel |   std_rel |
> |:----------------------|-----------:|----------:|
> | mm-unstable           |       0.0% |      0.1% |
> | hugetlb-rmap-cleanups |       0.5% |      0.1% |
> | fork-batching         |     -10.3% |      0.1% |

It's weird that an effective folio_test_large() should affect
performance that much. So far I haven't seen that behavior on x86; I
wonder why arm64 should behave differently here (also for the rmap
cleanups). Code layout/size?

I'll dig it up again and test on x86 once more.

[...]

> 
> Yeah, that would probably work. But we need to be careful for the case where
> start_ptep is in the middle of a contpte block (which can happen - due to some
> vma splitting operations, we can have a contpte block that spans 2 vmas). So
> nr_cont_ptes() either needs to be spec'ed to return the contpte size only when
> start_ptep points to the front of the block and return 1 at all other times,
> or it needs to return the number of ptes remaining to the end of the block (as
> it does in my v4).
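The second option (remaining to the end of the block) boils down to simple alignment arithmetic; a hypothetical helper (the name and exact form are assumptions, not the v4 code) might look like:

/*
 * Sketch: number of ptes from addr to the end of the contpte block
 * that contains it. Returns CONT_PTES when addr is block-aligned and
 * the partial tail count when addr starts mid-block (e.g. after a vma
 * split left a block spanning two vmas).
 */
static inline unsigned int nr_cont_ptes_remaining(unsigned long addr)
{
	return CONT_PTES - ((addr >> PAGE_SHIFT) & (CONT_PTES - 1));
}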
> 
> But I guess we need to get to the bottom of my arm64 perf numbers first... I'll
> debug those bugs and rerun.

Yes, I'll dig into it on x86 once more.

-- 
Cheers,

David / dhildenb

