Message-ID: <CAK1f24mNZ5=HubBNcnuabzWAEqAALnnKVd3N9D5+jNBjxO6p+w@mail.gmail.com>
Date: Fri, 10 May 2024 17:19:12 +0800
From: Lance Yang <ioworker0@...il.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Bang Li <libang.li@...group.com>, akpm@...ux-foundation.org, 
	chenhuacai@...nel.org, tsbogend@...ha.franken.de, paul.walmsley@...ive.com, 
	palmer@...belt.com, chris@...kel.net, jcmvbkbc@...il.com, 
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, loongarch@...ts.linux.dev, 
	linux-riscv@...ts.infradead.org, david@...hat.com, libang.linux@...il.com
Subject: Re: [PATCH v2 5/5] mm: Add update_mmu_tlb_range()

On Fri, May 10, 2024 at 5:05 PM Ryan Roberts <ryan.roberts@....com> wrote:
>
> On 06/05/2024 16:51, Bang Li wrote:
> > After commit 19eaf44954df ("mm: thp: support allocation of anonymous
> > multi-size THP"), we may need to batch-update the TLB for an address
> > range via repeated calls to the update_mmu_tlb function. We can simplify
> > this by adding an update_mmu_tlb_range function, which may also avoid
> > executing some unnecessary code on some architectures.
> >
> > Signed-off-by: Bang Li <libang.li@...group.com>
> > ---
> >  include/linux/pgtable.h | 8 ++++++++
> >  mm/memory.c             | 4 +---
> >  2 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 18019f037bae..869bfe6054f1 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
> >  #define __HAVE_ARCH_UPDATE_MMU_TLB
> >  #endif
>
> Given that you are implementing update_mmu_tlb_range() in all the arches that
> currently override update_mmu_tlb(), I wonder if it would be cleaner to remove
> update_mmu_tlb() from all those arches and define it generically, removing the
> ability for arches to override it:

Sounds great! Let's get it done.

>
> static inline void update_mmu_tlb(struct vm_area_struct *vma,
>                                 unsigned long address, pte_t *ptep)
> {
>         update_mmu_tlb_range(vma, address, ptep, 1);
> }
>
> >
> > +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> > +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
> > +                             unsigned long address, pte_t *ptep, unsigned int nr)
> > +{
> > +}
> > +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> > +#endif
>
> Then you could use the modern override scheme, as Lance suggested, and you
> won't have any confusion with __HAVE_ARCH_UPDATE_MMU_TLB because it won't
> exist anymore.

+1. It might be better to use the modern override scheme :)
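
Something like the following, perhaps (untested sketch; the arch-side hunk
is just illustrative, not taken from your patch). An arch that wants its own
implementation defines the function and then a macro with the same name, and
the generic header only provides the no-op fallback when that macro is absent:

/* In an arch header that overrides the generic version: */
static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
                                unsigned long address, pte_t *ptep, unsigned int nr)
{
        /* arch-specific TLB update for the nr-entry range at address */
}
#define update_mmu_tlb_range update_mmu_tlb_range

/* In include/linux/pgtable.h, the generic no-op fallback: */
#ifndef update_mmu_tlb_range
static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
                                unsigned long address, pte_t *ptep, unsigned int nr)
{
}
#endif

That way there is no separate __HAVE_ARCH_* define to keep in sync.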

Thanks,
Lance

>
> > +
> >  /*
> >   * Some architectures may be able to avoid expensive synchronization
> >   * primitives when modifications are made to PTE's which are already
> > diff --git a/mm/memory.c b/mm/memory.c
> > index eea6e4984eae..2d53e29cf76e 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >       vm_fault_t ret = 0;
> >       int nr_pages = 1;
> >       pte_t entry;
> > -     int i;
> >
> >       /* File mapping without ->vm_ops ? */
> >       if (vma->vm_flags & VM_SHARED)
> > @@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >               update_mmu_tlb(vma, addr, vmf->pte);
> >               goto release;
> >       } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> > -             for (i = 0; i < nr_pages; i++)
> > -                     update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
> > +             update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
>
> I certainly agree that this will be a useful helper to have. I expect there will
> be more users in future.
>
> >               goto release;
> >       }
> >
>
