Message-ID: <CAJuCfpEHEYKhpf7n6nQuB=s_okV=uQZ37OhWfki+iHgwxUmUHw@mail.gmail.com>
Date: Mon, 13 Jan 2025 13:08:28 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Wei Yang <richard.weiyang@...il.com>
Cc: Mateusz Guzik <mjguzik@...il.com>, akpm@...ux-foundation.org, peterz@...radead.org,
willy@...radead.org, liam.howlett@...cle.com, lorenzo.stoakes@...cle.com,
david.laight.linux@...il.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com, dave@...olabs.net,
paulmck@...nel.org, brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
hughd@...gle.com, lokeshgidra@...gle.com, minchan@...gle.com,
jannh@...gle.com, shakeel.butt@...ux.dev, souravpanda@...gle.com,
pasha.tatashin@...een.com, klarasmodin@...il.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a
reference count
On Sun, Jan 12, 2025 at 5:47 PM Wei Yang <richard.weiyang@...il.com> wrote:
>
> On Sat, Jan 11, 2025 at 12:14:47PM -0800, Suren Baghdasaryan wrote:
> >On Sat, Jan 11, 2025 at 3:24 AM Mateusz Guzik <mjguzik@...il.com> wrote:
> >>
> >> On Fri, Jan 10, 2025 at 08:25:58PM -0800, Suren Baghdasaryan wrote:
> >>
> >> So there were quite a few iterations of the patch and I have not been
> >> reading the majority of the feedback, so it may be that I missed
> >> something; apologies upfront. :)
> >>
>
> Hi, I am new to memory barriers. Hope I'm not bothering you.
>
> >> > /*
> >> > * Try to read-lock a vma. The function is allowed to occasionally yield false
> >> > * locked result to avoid performance overhead, in which case we fall back to
> >> > @@ -710,6 +742,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
> >> > */
> >> > static inline bool vma_start_read(struct vm_area_struct *vma)
> >> > {
> >> > + int oldcnt;
> >> > +
> >> > /*
> >> > * Check before locking. A race might cause false locked result.
> >> > * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> >> > @@ -720,13 +754,19 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >> > if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> >> > return false;
> >> >
> >> > - if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
> >> > + /*
> >> > + * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail
> >> > + * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
> >> > + */
> >> > + if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> >> > + VMA_REF_LIMIT)))
> >> > return false;
> >> >
> >>
> >> Replacing down_read_trylock() with the new routine loses an acquire
> >> fence. That alone is not a problem, but see below.
> >
> >Hmm. I think this acquire fence is actually necessary. We don't want
> >the later vm_lock_seq check to be reordered to happen before we take
> >the refcount. Otherwise this might happen:
> >
> >reader                              writer
> >if (vm_lock_seq == mm_lock_seq) // check got reordered
> >    return false;
> >                                    vm_refcnt += VMA_LOCK_OFFSET
> >                                    vm_lock_seq = mm_lock_seq
> >                                    vm_refcnt -= VMA_LOCK_OFFSET
> >if (!__refcount_inc_not_zero_limited())
> >    return false;
> >
> >Both of the reader's checks will pass, and the reader would read-lock a
> >vma that is actually write-locked.
> >
>
> Here, what we plan to do is define __refcount_inc_not_zero_limited() with
> an acquire fence, e.g. with atomic_try_cmpxchg_acquire(), right?
Correct. __refcount_inc_not_zero_limited() does not do that in this
version, but I'll fix that.
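
Something along these lines, as a rough sketch (assuming we keep a
refcount_t-style helper; the final name and shape may differ):

	static inline bool
	__refcount_inc_not_zero_limited_acquire(refcount_t *r, int *oldp,
						int limit)
	{
		int old = refcount_read(r);

		do {
			/* 0 means detached; above the limit means write-locked */
			if (!old || old > limit)
				break;
		/* ACQUIRE on success; pairs with RELEASE on the unlock side */
		} while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + 1));

		if (oldp)
			*oldp = old;

		return old && old <= limit;
	}
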
>
> >>
> >> > + rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
> >> > /*
> >> > - * Overflow might produce false locked result.
> >> > + * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
> >> > * False unlocked result is impossible because we modify and check
> >> > - * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> >> > + * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
> >> > * modification invalidates all existing locks.
> >> > *
> >> > * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
> >> > @@ -735,9 +775,10 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >> > * This pairs with RELEASE semantics in vma_end_write_all().
> >> > */
> >> > if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
>
> One question here is: would the compiler optimize away the read of
> vm_lock_seq here, since we have already read it at the beginning?
>
> Or, with the acquire fence added above, the compiler won't optimize it?
Correct. See the "ACQUIRE operations" section in
https://www.kernel.org/doc/Documentation/memory-barriers.txt,
specifically this: "It guarantees that all memory operations after the
ACQUIRE operation will appear to happen after the ACQUIRE operation
with respect to the other components of the system."
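
IOW, once the refcount increment is made an ACQUIRE operation, neither
the CPU nor the compiler can move the sequence check ahead of it.
Annotated with the names from this patch:

	/* the increment is the "lock": ACQUIRE orders everything below it */
	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt,
						      &oldcnt, VMA_REF_LIMIT)))
		return false;

	/*
	 * This load can neither be hoisted above the increment nor be
	 * satisfied from the value read before it.
	 */
	if (unlikely(vma->vm_lock_seq ==
		     raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
		/* racing writer: drop the reference and bail */
		...
	}
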
> Or should we use READ_ONCE(vma->vm_lock_seq) here?
>
> >>
> >> The previous modification of this spot to raw_read_seqcount loses the
> >> acquire fence, making the above comment not line up with the code.
> >
> >Is it? From reading the seqcount code
> >(https://elixir.bootlin.com/linux/v6.13-rc3/source/include/linux/seqlock.h#L211):
> >
> >raw_read_seqcount()
> >  seqprop_sequence()
> >    __seqprop(s, sequence)
> >      __seqprop_sequence()
> >        smp_load_acquire()
> >
> >smp_load_acquire() still provides the acquire fence. Am I missing something?
> >
> >>
> >> I don't know if the stock code (with down_read_trylock()) is correct as
> >> is -- it looks fine on a cursory reading, fwiw. However, if it indeed
> >> works, the acquire fence stemming from the lock routine is a mandatory
> >> part of it, afaics.
> >>
> >> I think the best way forward is to add a new refcount routine which
> >> ships with an acquire fence.
> >
> >I plan on replacing the refcount_t usage here with an atomic since, as
> >Hillf noted, refcount is not designed to be used for locking, and I
> >will make sure the down_read_trylock() replacement provides an acquire
> >fence.
> >
>
> Hmm.. refcount_t is defined in terms of atomic_t. I am lost as to why
> replacing refcount_t with atomic_t would help.
My point is that refcount_t is not designed for locking, so changing
refcount-related functions and adding fences there would be wrong. So,
I'll move to using the more generic atomic_t and implement the
functionality I need without affecting the refcounting functions.
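
The vm_refcnt state encoding would stay the same, just implemented with
plain atomic_t primitives. For reference (illustrative, matching the
semantics in this series):

	/*
	 * vm_refcnt state:
	 *   0                  - vma is detached
	 *   1                  - attached, no readers
	 *   1 + N              - attached, N readers hold the vma read-locked
	 *   >= VMA_LOCK_OFFSET - write-locked; readers fail the increment
	 *                        because VMA_REF_LIMIT < VMA_LOCK_OFFSET
	 */
	atomic_t vm_refcnt;
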
>
> >>
> >> Otherwise I would suggest:
> >> 1. a comment above __refcount_inc_not_zero_limited saying there is an
> >> acq fence issued later
> >> 2. smp_rmb() slapped between that and seq accesses
> >>
> >> If the now removed fence is somehow not needed, I think a comment
> >> explaining it is necessary.
> >>
> >> > @@ -813,36 +856,33 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> >> >
> >> > static inline void vma_assert_locked(struct vm_area_struct *vma)
> >> > {
> >> > - if (!rwsem_is_locked(&vma->vm_lock.lock))
> >> > + if (refcount_read(&vma->vm_refcnt) <= 1)
> >> > vma_assert_write_locked(vma);
> >> > }
> >> >
> >>
> >> This now forces the compiler to emit a load from vm_refcnt even if
> >> vma_assert_write_locked expands to nothing. IOW, this wants to hide
> >> behind the same CONFIG_DEBUG_VM guards as vma_assert_write_locked.
> >
> >True. I guess I'll have to avoid calling vma_assert_write_locked() and
> >do something like this:
> >
> >static inline void vma_assert_locked(struct vm_area_struct *vma)
> >{
> >        unsigned int mm_lock_seq;
> >
> >        VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
> >                      !__is_vma_write_locked(vma, &mm_lock_seq), vma);
> >}
> >
> >Will make the change.
> >
> >Thanks for the feedback!
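
BTW, that is why hiding the check inside VM_BUG_ON_VMA() should be
enough: with !CONFIG_DEBUG_VM it expands (abridged, from
include/linux/mmdebug.h and include/linux/build_bug.h) to:

	#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
	#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
	/* sizeof() only type-checks its argument, so no load is emitted */
	#define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))

so the refcount_read() of vm_refcnt disappears on non-debug builds.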
>
> --
> Wei Yang
> Help you, Help me