Message-ID: <CAJuCfpGyraPAFpxoJ-ZRsJf9pWe3jno4_VjcbxHPthwLjc9DZw@mail.gmail.com>
Date: Fri, 10 Jan 2025 12:40:28 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>, Suren Baghdasaryan <surenb@...gle.com>, 
	akpm@...ux-foundation.org, peterz@...radead.org, willy@...radead.org, 
	lorenzo.stoakes@...cle.com, mhocko@...e.com, vbabka@...e.cz, 
	hannes@...xchg.org, mjguzik@...il.com, oliver.sang@...el.com, 
	mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com, 
	oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org, brauner@...nel.org, 
	dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com, 
	lokeshgidra@...gle.com, minchan@...gle.com, jannh@...gle.com, 
	shakeel.butt@...ux.dev, souravpanda@...gle.com, pasha.tatashin@...een.com, 
	klarasmodin@...il.com, richard.weiyang@...il.com, corbet@....net, 
	linux-doc@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	kernel-team@...roid.com
Subject: Re: [PATCH v8 15/16] mm: make vma cache SLAB_TYPESAFE_BY_RCU

On Fri, Jan 10, 2025 at 11:51 AM 'Liam R. Howlett' via kernel-team
<kernel-team@...roid.com> wrote:
>
> * Suren Baghdasaryan <surenb@...gle.com> [250110 14:08]:
> > On Fri, Jan 10, 2025 at 9:48 AM Liam R. Howlett <Liam.Howlett@...cle.com> wrote:
> > >
> > > * Suren Baghdasaryan <surenb@...gle.com> [250108 21:31]:
> > > > To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure
> > > > that object reuse before the RCU grace period is over will be
> > > > detected by lock_vma_under_rcu().
> > > > Current checks are sufficient as long as the vma is detached before
> > > > it is freed. The only place this is not currently happening is in
> > > > exit_mmap(). Add the missing vma_mark_detached() in exit_mmap().
> > > > Another issue which might trick lock_vma_under_rcu() during vma
> > > > reuse is vm_area_dup(), which copies the entire content of the vma
> > > > into a new one, overwriting the new vma's vm_refcnt and temporarily
> > > > making it appear attached. This might trick a racing
> > > > lock_vma_under_rcu() into operating on a reused vma if it found the
> > > > vma before it got reused. To prevent this situation, we should
> > > > ensure that vm_refcnt stays in the detached state (0) while the vma
> > > > is copied and advances to the attached state only after the vma is
> > > > added into the vma tree. Introduce vma_copy(), which preserves the
> > > > new vma's vm_refcnt, and use it in vm_area_dup(). Since all vmas are
> > > > in the detached state with no current readers when they are freed,
> > > > lock_vma_under_rcu() will not be able to take vm_refcnt after the
> > > > vma gets detached, even if the vma is reused.
> > > > Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will
> > > > facilitate vm_area_struct reuse and will minimize the number of
> > > > call_rcu() calls.
> > > >
> > > > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > > > ---
> > > >  include/linux/mm.h               |  2 -
> > > >  include/linux/mm_types.h         | 10 +++--
> > > >  include/linux/slab.h             |  6 ---
> > > >  kernel/fork.c                    | 72 ++++++++++++++++++++------------
> > > >  mm/mmap.c                        |  3 +-
> > > >  mm/vma.c                         | 11 ++---
> > > >  mm/vma.h                         |  2 +-
> > > >  tools/testing/vma/vma_internal.h |  7 +---
> > > >  8 files changed, 59 insertions(+), 54 deletions(-)
> > > >
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > index 1d6b1563b956..a674558e4c05 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
> > > >  struct vm_area_struct *vm_area_alloc(struct mm_struct *);
> > > >  struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
> > > >  void vm_area_free(struct vm_area_struct *);
> > > > -/* Use only if VMA has no other users */
> > > > -void __vm_area_free(struct vm_area_struct *vma);
> > > >
> > > >  #ifndef CONFIG_MMU
> > > >  extern struct rb_root nommu_region_tree;
> > > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > > > index 2d83d79d1899..93bfcd0c1fde 100644
> > > > --- a/include/linux/mm_types.h
> > > > +++ b/include/linux/mm_types.h
> > > > @@ -582,6 +582,12 @@ static inline void *folio_get_private(struct folio *folio)
> > > >
> > > >  typedef unsigned long vm_flags_t;
> > > >
> > > > +/*
> > > > + * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > > > + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > > > + */
> > > > +typedef struct { unsigned long v; } freeptr_t;
> > > > +
> > > >  /*
> > > >   * A region containing a mapping of a non-memory backed file under NOMMU
> > > >   * conditions.  These are held in a global tree and are pinned by the VMAs that
> > > > @@ -695,9 +701,7 @@ struct vm_area_struct {
> > > >                       unsigned long vm_start;
> > > >                       unsigned long vm_end;
> > > >               };
> > > > -#ifdef CONFIG_PER_VMA_LOCK
> > > > -             struct rcu_head vm_rcu; /* Used for deferred freeing. */
> > > > -#endif
> > > > +             freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
> > > >       };
> > > >
> > > >       /*
> > > > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > > > index 10a971c2bde3..681b685b6c4e 100644
> > > > --- a/include/linux/slab.h
> > > > +++ b/include/linux/slab.h
> > > > @@ -234,12 +234,6 @@ enum _slab_flag_bits {
> > > >  #define SLAB_NO_OBJ_EXT              __SLAB_FLAG_UNUSED
> > > >  #endif
> > > >
> > > > -/*
> > > > - * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > > > - * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > > > - */
> > > > -typedef struct { unsigned long v; } freeptr_t;
> > > > -
> > > >  /*
> > > >   * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> > > >   *
> > > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > > index 9d9275783cf8..770b973a099c 100644
> > > > --- a/kernel/fork.c
> > > > +++ b/kernel/fork.c
> > > > @@ -449,6 +449,41 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > > >       return vma;
> > > >  }
> > > >
> > >
> > > There exists a copy_vma() which copies the vma to a new area in the mm
> > > in rmap.  Naming this vma_copy() is confusing :)
> > >
> > > It might be better to just put this code in vm_area_dup(), or call it
> > > __vm_area_dup() or __vma_dup()?
> >
> > Hmm. It's not really duplicating a vma but copying its content (no
> > allocation). How about __vm_area_copy() to indicate it is copying
> > vm_area_struct content?
>
>
> Sorry, I missed this. It's not copying all the content either.
>
> vm_area_init_dup() maybe?

Ah, how about vm_area_init_from(src, dest)?

>
> Considering the scope of the series, I'm not sure I want to have a
> bikeshed conversation. But I also don't want copy_<foo> / <foo>_copy
> confusion in the future.
>
>
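
For illustration, a minimal sketch of what a helper along the lines of the
vm_area_init_from(src, dest) proposed above might look like: it copies the
source vma's contents field by field while leaving the destination's
vm_refcnt untouched, so a freshly allocated (detached) vma never transiently
appears attached to a racing lock_vma_under_rcu(). The name and the exact
field list are assumptions taken from this discussion, not the final patch.

/*
 * Hypothetical sketch only: copy @src into @dest without touching
 * @dest->vm_refcnt, so the destination stays in the detached state (0)
 * until it is actually added to the vma tree.
 */
static void vm_area_init_from(const struct vm_area_struct *src,
			      struct vm_area_struct *dest)
{
	dest->vm_mm = src->vm_mm;
	dest->vm_ops = src->vm_ops;
	dest->vm_start = src->vm_start;
	dest->vm_end = src->vm_end;
	dest->anon_vma = src->anon_vma;
	dest->vm_page_prot = src->vm_page_prot;
	vm_flags_init(dest, src->vm_flags);
	dest->vm_file = src->vm_file;
	dest->vm_private_data = src->vm_private_data;
	dest->vm_pgoff = src->vm_pgoff;
	/* ... remaining fields copied here, but deliberately not vm_refcnt ... */
}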

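Similarly, per the last paragraph of the commit message, a sketch of how the
vm_area_struct cache could be created as SLAB_TYPESAFE_BY_RCU with the
freelist pointer kept in the dedicated vm_freeptr field, so the allocator
does not overwrite live fields of a freed-and-reused object. This assumes
the kmem_cache_args-based kmem_cache_create() variant with freeptr_offset
support; the flag set shown is illustrative rather than quoted from the
patch.

/*
 * Hypothetical sketch: typesafe-by-RCU cache with a dedicated freelist
 * pointer, so reused objects keep vm_refcnt meaningful for racing
 * lock_vma_under_rcu() readers.
 */
struct kmem_cache_args args = {
	.use_freeptr_offset = true,
	.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
};

vm_area_cachep = kmem_cache_create("vm_area_struct",
		sizeof(struct vm_area_struct), &args,
		SLAB_HWCACHE_ALIGN | SLAB_PANIC |
		SLAB_TYPESAFE_BY_RCU | SLAB_ACCOUNT);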