Message-ID: <0bef5423-6eea-446b-8854-980e9c23a948@redhat.com>
Date: Mon, 18 Dec 2023 18:47:17 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>,
 Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Ard Biesheuvel <ardb@...nel.org>, Marc Zyngier <maz@...nel.org>,
 Oliver Upton <oliver.upton@...ux.dev>, James Morse <james.morse@....com>,
 Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu
 <yuzenghui@...wei.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>,
 Alexander Potapenko <glider@...gle.com>,
 Andrey Konovalov <andreyknvl@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
 Vincenzo Frascino <vincenzo.frascino@....com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Anshuman Khandual <anshuman.khandual@....com>,
 Matthew Wilcox <willy@...radead.org>, Yu Zhao <yuzhao@...gle.com>,
 Mark Rutland <mark.rutland@....com>, Kefeng Wang
 <wangkefeng.wang@...wei.com>, John Hubbard <jhubbard@...dia.com>,
 Zi Yan <ziy@...dia.com>, Barry Song <21cnbao@...il.com>,
 Alistair Popple <apopple@...dia.com>, Yang Shi <shy828301@...il.com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()

On 18.12.23 11:50, Ryan Roberts wrote:
> Convert copy_pte_range() to copy a batch of ptes in one go. A given
> batch is determined by the architecture with the new helper,
> pte_batch_remaining(), and maps a physically contiguous block of memory,
> all of it belonging to the same folio. A pte batch is then
> write-protected in one go in the parent using the new helper,
> ptep_set_wrprotects(), and is set in one go in the child using the new
> helper, set_ptes_full().
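
(For orientation, the batched inner loop of copy_pte_range() ends up
shaped roughly as below. This is a simplified sketch, not the actual
hunk from mm/memory.c: the exact signatures of ptep_set_wrprotects()
and set_ptes_full() are assumed from the description above, and the
refcount/rmap/rss handling is elided.)

do {
	/* Deliberately only one ptep_get() per batch. */
	pte_t pte = ptep_get(src_pte);
	unsigned int nr = pte_batch_remaining(pte, addr, end);

	/* Write-protect the whole batch in the parent in one go ... */
	ptep_set_wrprotects(src_mm, addr, src_pte, nr);

	/* ... and populate the whole batch in the child in one go. */
	set_ptes_full(dst_mm, addr, dst_pte, pte_wrprotect(pte), nr, true);

	addr += nr * PAGE_SIZE;
	src_pte += nr;
	dst_pte += nr;
} while (addr != end);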
> 
> The primary motivation for this change is to reduce the number of tlb
> maintenance operations that the arm64 backend has to perform during
> fork, as it is about to add transparent support for the "contiguous bit"
> in its ptes. By write-protecting the parent using the new
> ptep_set_wrprotects() (note the 's' at the end) function, the backend
> can avoid having to unfold contig ranges of PTEs, which is expensive,
> when all ptes in the range are being write-protected. Similarly, by
> using set_ptes_full() rather than set_pte_at() to set up ptes in the
> child, the backend does not need to fold a contiguous range once its
> ptes are all populated - they can be populated as a contiguous range in
> the first place.
> 
> This code is very performance sensitive, and a significant amount of
> effort has been put into not regressing performance for the order-0
> folio case. By default, pte_batch_remaining() is a compile-time
> constant of 1, which enables the compiler to simplify the extra loops
> that are added for batching and produce code that is equivalent (and
> equally performant) to the previous implementation.
> 
> This change addresses the core-mm refactoring only; a separate change
> will implement pte_batch_remaining(), ptep_set_wrprotects() and
> set_ptes_full() in the arm64 backend to realize the performance
> improvement as part of the work to enable contpte mappings.
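
(Purely illustrative: the arm64 override is not part of this patch, but
to make the intent concrete, it could plausibly take a shape like the
following, where pte_valid_cont() and CONT_PTES are arm64's existing
contiguous-bit helper and contiguous-range size. Treat this as a
sketch, not the real contpte change.)

#define pte_batch_remaining pte_batch_remaining
static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned int nr;

	if (!pte_valid_cont(pte))
		return 1;

	/* Pages remaining until the next contpte boundary ... */
	nr = CONT_PTES - ((addr >> PAGE_SHIFT) % CONT_PTES);

	/* ... clamped to the batch ceiling (e.g. end of vma). */
	return min_t(unsigned int, nr, (end - addr) >> PAGE_SHIFT);
}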
> 
> To ensure the arm64 backend is performant once these helpers are
> implemented, this change is very careful to call ptep_get() only once
> per pte batch.
> 
> The following microbenchmark results demonstrate that there is no
> significant performance change after this patch. Fork is called in a
> tight loop in a process with 1G of populated memory and the time for
> the function to execute is measured. 100 iterations per run, 8 runs
> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
> were performed for the case where the 1G of memory is composed of
> order-0 folios and the case where it is composed of pte-mapped order-9
> folios. Negative is faster, positive is slower, compared to the
> baseline upon which the series is based:
> 
> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
> | fork          |-------------------|-------------------|
> | microbench    |    mean |   stdev |    mean |   stdev |
> |---------------|---------|---------|---------|---------|
> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
> 
> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
> | fork          |-------------------|-------------------|
> | microbench    |    mean |   stdev |    mean |   stdev |
> |---------------|---------|---------|---------|---------|
> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
> 
> Tested-by: John Hubbard <jhubbard@...dia.com>
> Reviewed-by: Alistair Popple <apopple@...dia.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> ---
>   include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>   mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>   2 files changed, 139 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index af7639c3b0a3..db93fb81465a 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>   #define arch_flush_lazy_mmu_mode()	do {} while (0)
>   #endif
>   
> +#ifndef pte_batch_remaining
> +/**
> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
> + * @pte: Page table entry for the first page.
> + * @addr: Address of the first page.
> + * @end: Batch ceiling (e.g. end of vma).
> + *
> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
> + * In such cases, this function returns the remaining number of pages to the end
> + * of the current batch, as defined by addr. This can be useful when iterating
> + * over ptes.
> + *
> + * May be overridden by the architecture, else batch size is always 1.
> + */
> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
> +						unsigned long end)
> +{
> +	return 1;
> +}
> +#endif

It's a shame we now lose the optimization for all other architectures.

Was there no way to have some basic batching mechanism that doesn't 
require arch specifics?

I'd have thought that something very basic would have worked, like the
checks below (rough sketch after the list):

* Check if the PTEs are the same when setting the PFN to 0
* Check that the PFNs are consecutive
* Check that all PFNs belong to the same folio
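
In rough code, an untested sketch of what I have in mind is below.
pte_batch_clear_pfn() is made up here: there is currently no generic
way to mask the PFN out of a pte value, which is admittedly where some
arch input would still be needed.

/*
 * Untested sketch. pte_batch_clear_pfn() is hypothetical: it would
 * return the pte with its PFN bits zeroed, so that only the attribute
 * bits are compared. A real version would likely also have to tolerate
 * differing access/dirty bits within a batch.
 */
static unsigned int generic_pte_batch(struct folio *folio, pte_t *ptep,
				      pte_t pte, unsigned int max_nr)
{
	pte_t expected = pte_batch_clear_pfn(pte);
	unsigned long pfn = pte_pfn(pte);
	unsigned int nr = 1;

	while (nr < max_nr) {
		pte_t next = ptep_get(ptep + nr);

		/* Same pte bits once the PFN is set to 0 ... */
		if (!pte_same(pte_batch_clear_pfn(next), expected))
			break;
		/* ... consecutive PFNs ... */
		if (pte_pfn(next) != pfn + nr)
			break;
		/* ... and all pages belonging to the same folio. */
		if (page_folio(pfn_to_page(pfn + nr)) != folio)
			break;
		nr++;
	}

	return nr;
}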

-- 
Cheers,

David / dhildenb

