Message-ID: <CAJuCfpFLgO5E23xcXMVF=ADz9aNJ-OowHSsC48iF+y+P5fMoVQ@mail.gmail.com>
Date: Wed, 20 Nov 2024 07:54:16 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: akpm@...ux-foundation.org, willy@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, mhocko@...e.com, hannes@...xchg.org,
mjguzik@...il.com, oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com, dave@...olabs.net,
paulmck@...nel.org, brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
hughd@...gle.com, minchan@...gle.com, jannh@...gle.com,
shakeel.butt@...ux.dev, souravpanda@...gle.com, pasha.tatashin@...een.com,
corbet@....net, linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v4 4/5] mm: make vma cache SLAB_TYPESAFE_BY_RCU
On Wed, Nov 20, 2024 at 2:16 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 11/20/24 01:08, Suren Baghdasaryan wrote:
> > To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure that
> > object reuse before the RCU grace period is over is detected inside
> > lock_vma_under_rcu().
> > lock_vma_under_rcu() enters an RCU read section, finds the vma at the
> > given address, locks the vma and checks if it got detached or remapped
> > to cover a different address range. These last checks ensure that the
> > vma was not modified after we found it but before we locked it.
> > vma reuse introduces several new possibilities:
> > 1. vma can be reused after it was found but before it is locked;
> > 2. vma can be reused and reinitialized (including changing its vm_mm)
> > while being locked in vma_start_read();
> > 3. vma can be reused and reinitialized after it was found but before
> > it is locked, then attached at a new address or to a new mm while being
> > read-locked.
> > For case #1, the current checks help detect the cases where:
> > - vma was reused but not yet added into the tree (detached check);
> > - vma was reused at a different address range (address check).
> > We are missing a check for vm_mm to ensure the reused vma was not
> > attached to a different mm. This patch adds the missing check.
> > For case #2, we pass mm to vma_start_read() to prevent access to
> > unstable vma->vm_mm.
>
> So we may now be looking at a different mm's mm_lock_seq.sequence and
> return a false unlocked result, right? I guess the mm validation in
> lock_vma_under_rcu() handles that, but maybe the comment on
> vma_start_read() needs updating.
Correct. I'll add a comment about this.
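Something like this, as a rough sketch rather than the literal patch
(comment wording is mine; it assumes the embedded per-vma lock and the
mm_lock_seq seqcount conversion from earlier in this series):

static inline bool vma_start_read(struct mm_struct *mm,
				  struct vm_area_struct *vma)
{
	/*
	 * The caller passes the mm it looked the vma up under; vma->vm_mm
	 * must not be dereferenced here because the vma may have been
	 * reused and reinitialized under us. If the vma was reused by
	 * another mm we can compare against the wrong mm's sequence and
	 * get a false unlocked result, so after this returns true,
	 * lock_vma_under_rcu() re-checks vma->vm_mm == mm (and the
	 * address range) before trusting the vma.
	 */
	if (READ_ONCE(vma->vm_lock_seq) ==
	    READ_ONCE(mm->mm_lock_seq.sequence))
		return false;

	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
		return false;

	/* Re-check under the lock; a writer may have raced with us. */
	if (unlikely(vma->vm_lock_seq ==
		     READ_ONCE(mm->mm_lock_seq.sequence))) {
		up_read(&vma->vm_lock.lock);
		return false;
	}
	return true;
}

lock_vma_under_rcu() keeps its existing detached and address-range
checks and additionally compares vma->vm_mm against the mm it was
called with after locking.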
>
> > For case #3, we ensure the order in which the vma->detached flag and
> > vm_start/vm_end/vm_mm are set and checked. A vma gets attached after
> > vm_start/vm_end/vm_mm are set, and lock_vma_under_rcu() should check
> > vma->detached before checking vm_start/vm_end/vm_mm. This is required
> > because attaching a vma happens without the vma write-lock, as opposed
> > to detaching, which requires the vma write-lock. This patch adds the
> > memory barriers inside is_vma_detached() and vma_mark_attached() that
> > are needed to order reads and writes to vma->detached vs
> > vm_start/vm_end/vm_mm.
> > After these provisions, SLAB_TYPESAFE_BY_RCU is added to vm_area_cachep.
> > This will facilitate vm_area_struct reuse and will minimize the number
> > of call_rcu() calls.
> > Adding a freeptr_t into vm_area_struct (unioned with vm_start/vm_end)
> > would avoid bloating the structure, however custom free pointers are
> > currently not supported in combination with a ctor
> > (see the comment for kmem_cache_args.freeptr_offset).
>
> I think there's nothing fundamental preventing to support that, there was
> just no user of it. We can do it later.
Oh, ok. I can add it back so that we have one user, and once the
mechanism is implemented it can be used for testing. Adding freeptr_t
has no negative effects and will reduce later churn.
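For the record, roughly what adding it back could look like once
freeptr_offset works with a ctor (illustrative only; the field name
vm_freeptr and the exact flag set are placeholders, not the final
patch):

	/* in struct vm_area_struct */
	union {
		struct {
			unsigned long vm_start;
			unsigned long vm_end;
		};
		/* free pointer slot used by SLAB_TYPESAFE_BY_RCU */
		freeptr_t vm_freeptr;
	};

	/* cache creation in fork.c */
	struct kmem_cache_args args = {
		.use_freeptr_offset = true,
		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
		.ctor = vm_area_ctor,
	};

	vm_area_cachep = kmem_cache_create("vm_area_struct",
			sizeof(struct vm_area_struct), &args,
			SLAB_PANIC|SLAB_ACCOUNT|SLAB_TYPESAFE_BY_RCU);

Since vm_start/vm_end are only meaningful while the object is live and
the free pointer is only used while it sits on a freelist, the union
costs nothing.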
>
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -436,6 +436,11 @@ static struct kmem_cache *vm_area_cachep;
> > /* SLAB cache for mm_struct structures (tsk->mm) */
> > static struct kmem_cache *mm_cachep;
> >
> > +static void vm_area_ctor(void *data)
> > +{
> > + vma_lock_init(data);
> > +}
> > +
> > struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > {
> > struct vm_area_struct *vma;
> > @@ -462,8 +467,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> > * orig->shared.rb may be modified concurrently, but the clone
> > * will be reinitialized.
> > */
> > - data_race(memcpy(new, orig, sizeof(*new)));
> > - vma_lock_init(new);
> > + vma_copy(new, orig);
> > INIT_LIST_HEAD(&new->anon_vma_chain);
> > #ifdef CONFIG_PER_VMA_LOCK
> > /* vma is not locked, can't use vma_mark_detached() */
>
> Here we mark it detached but we might have already copied it as attached and
> confused a reader?
Very true. Thanks for catching this one!
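For reference, the case #3 ordering that such a copy would break looks
roughly like this (sketch only; exact barrier placement in the patch
may differ):

static inline void vma_mark_attached(struct vm_area_struct *vma)
{
	/*
	 * Attaching happens without the vma write-lock, so make sure
	 * vm_start/vm_end/vm_mm are visible before the vma can be seen
	 * as attached. Pairs with the acquire in is_vma_detached().
	 */
	smp_store_release(&vma->detached, false);
}

static inline bool is_vma_detached(struct vm_area_struct *vma)
{
	/*
	 * Pairs with the release in vma_mark_attached(): only trust
	 * vm_start/vm_end/vm_mm after observing the vma as attached.
	 */
	return smp_load_acquire(&vma->detached);
}

If vm_area_dup() memcpy's orig's attached state into the new vma, a
lockless reader can observe the copy as "attached" while its other
fields are still being initialized.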
>
> I think this will be covered by what you said in reply to willy:
> "vma_copy() will have to also copy vma members individually."
Yes, I think so. vma_copy() will need to copy most but not all
members. vma->detached will be among those not copied.
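Something along these lines (illustrative sketch, member list
abbreviated; the real version needs the remaining members and
data_race() annotations for fields like orig->shared that may be
modified concurrently):

static void vma_copy(struct vm_area_struct *new, struct vm_area_struct *orig)
{
	new->vm_mm = orig->vm_mm;
	new->vm_start = orig->vm_start;
	new->vm_end = orig->vm_end;
	new->anon_vma = orig->anon_vma;
	new->vm_ops = orig->vm_ops;
	new->vm_pgoff = orig->vm_pgoff;
	new->vm_file = orig->vm_file;
	new->vm_private_data = orig->vm_private_data;
	vm_flags_init(new, orig->vm_flags);
	new->vm_page_prot = orig->vm_page_prot;
	/*
	 * vma->detached and the ctor-initialized vm_lock are deliberately
	 * not copied, so a concurrent reader can never observe the new vma
	 * as attached before vm_area_dup() finishes setting it up.
	 */
}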
Thanks!
>
> > @@ -475,32 +479,37 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> > return new;
> > }
> >