Message-ID: <87h6kta3ap.fsf@nvdebian.thelocal>
Date:   Fri, 08 Dec 2023 12:30:40 +1100
From:   Alistair Popple <apopple@...dia.com>
To:     Ryan Roberts <ryan.roberts@....com>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Ard Biesheuvel <ardb@...nel.org>,
        Marc Zyngier <maz@...nel.org>,
        Oliver Upton <oliver.upton@...ux.dev>,
        James Morse <james.morse@....com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Zenghui Yu <yuzenghui@...wei.com>,
        Andrey Ryabinin <ryabinin.a.a@...il.com>,
        Alexander Potapenko <glider@...gle.com>,
        Andrey Konovalov <andreyknvl@...il.com>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Vincenzo Frascino <vincenzo.frascino@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Matthew Wilcox <willy@...radead.org>,
        Yu Zhao <yuzhao@...gle.com>,
        Mark Rutland <mark.rutland@....com>,
        David Hildenbrand <david@...hat.com>,
        Kefeng Wang <wangkefeng.wang@...wei.com>,
        John Hubbard <jhubbard@...dia.com>, Zi Yan <ziy@...dia.com>,
        Barry Song <21cnbao@...il.com>, Yang Shi <shy828301@...il.com>,
        linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 02/15] mm: Batch-clear PTE ranges during zap_pte_range()


Ryan Roberts <ryan.roberts@....com> writes:

> Convert zap_pte_range() to clear a set of ptes in a batch. A given batch
> maps a physically contiguous block of memory, all belonging to the same
> folio. This will likely improve performance by a tiny amount due to
> removing duplicate calls to mark the folio dirty and accessed. And also
> provides us with a future opportunity to batch the rmap removal.
>
> However, the primary motivation for this change is to reduce the number
> of tlb maintenance operations that the arm64 backend has to perform
> during exit and other syscalls that cause zap_pte_range() (e.g. munmap,
> madvise(DONTNEED), etc.), as it is about to add transparent support for
> the "contiguous bit" in its ptes. By clearing ptes using the new
> clear_ptes() API, the backend doesn't have to perform an expensive
> unfold operation when a PTE being cleared is part of a contpte block.
> Instead it can just clear the whole block immediately.
>
> This change addresses the core-mm refactoring only, and introduces
> clear_ptes() with a default implementation that calls
> ptep_get_and_clear_full() for each pte in the range. Note that this API
> returns the pte at the beginning of the batch, but with the dirty and
> young bits set if ANY of the ptes in the cleared batch had those bits
> set; this information is applied to the folio by the core-mm. Given the
> batch is garranteed to cover only a single folio, collapsing this state

Nit: s/garranteed/guaranteed/

> does not lose any useful information.
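
For reference, the generic fallback I'm picturing from that description is
roughly the below. This is only an untested sketch of the intended
semantics (assuming the usual <linux/pgtable.h> helpers); the exact
signature and naming in the patch may well differ:

static inline pte_t clear_ptes(struct mm_struct *mm, unsigned long addr,
                               pte_t *ptep, int full, unsigned int nr)
{
        unsigned int i;
        pte_t pte, tmp;

        /* The first entry provides the pte value that gets returned. */
        pte = ptep_get_and_clear_full(mm, addr, ptep, full);

        for (i = 1; i < nr; i++) {
                addr += PAGE_SIZE;
                ptep++;

                tmp = ptep_get_and_clear_full(mm, addr, ptep, full);

                /* Fold dirty/young from every cleared pte into the result. */
                if (pte_dirty(tmp))
                        pte = pte_mkdirty(pte);
                if (pte_young(tmp))
                        pte = pte_mkyoung(pte);
        }

        return pte;
}

That matches my reading: the caller only needs the folio-wide dirty/young
summary, so collapsing the per-pte state like this should indeed lose
nothing.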
>
> A separate change will implement clear_ptes() in the arm64 backend to
> realize the performance improvement as part of the work to enable
> contpte mappings.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> ---
>  include/asm-generic/tlb.h |  9 ++++++
>  include/linux/pgtable.h   | 26 ++++++++++++++++
>  mm/memory.c               | 63 ++++++++++++++++++++++++++-------------
>  mm/mmu_gather.c           | 14 +++++++++
>  4 files changed, 92 insertions(+), 20 deletions(-)

<snip>

> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 4f559f4ddd21..57b4d5f0dfa4 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -47,6 +47,20 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
>  	return true;
>  }
>  
> +unsigned int tlb_get_guaranteed_space(struct mmu_gather *tlb)
> +{
> +	struct mmu_gather_batch *batch = tlb->active;
> +	unsigned int nr_next = 0;
> +
> +	/* Allocate next batch so we can guarrantee at least one batch. */
> +	if (tlb_next_batch(tlb)) {
> +		tlb->active = batch;

Rather than calling tlb_next_batch(tlb) and then undoing some of what it
does, I think it would be clearer to factor out the allocation part of
tlb_next_batch() into a separate function (e.g. tlb_alloc_batch()) that
you can call from both here and tlb_next_batch().
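
Something like the below is roughly what I had in mind. It's an untested
sketch that hand-waves over the delayed-rmap check and the exact field
usage in mmu_gather.c:

/*
 * Allocate (or find) the next batch and link it in, but don't make it
 * active, so it can be shared by tlb_next_batch() and
 * tlb_get_guaranteed_space().
 */
static struct mmu_gather_batch *tlb_alloc_batch(struct mmu_gather *tlb)
{
        struct mmu_gather_batch *batch = tlb->active;

        if (batch->next)
                return batch->next;

        if (tlb->batch_count == MAX_GATHER_BATCH_COUNT)
                return NULL;

        batch = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
        if (!batch)
                return NULL;

        tlb->batch_count++;
        batch->next = NULL;
        batch->nr = 0;
        batch->max = MAX_GATHER_BATCH;

        tlb->active->next = batch;
        return batch;
}

tlb_next_batch() would then just be tlb_alloc_batch() plus advancing
tlb->active, and tlb_get_guaranteed_space() could read the returned
batch's max directly instead of having to restore tlb->active afterwards.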

Otherwise I think this overall direction looks better than trying to
play funny games in the arch layer, as it's much clearer from the
core-mm code what's going on.

 - Alistair

> +		nr_next = batch->next->max;
> +	}
> +
> +	return batch->max - batch->nr + nr_next;
> +}
> +
>  #ifdef CONFIG_SMP
>  static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
>  {
