Message-ID: <guxwr4wzs5yt5ajrpwwpjdv6lbjf4dhgmjh7edrbc7lvevnh2o@joquw2jf6s4i>
Date: Thu, 15 Aug 2024 20:24:15 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Andrii Nakryiko <andrii.nakryiko@...il.com>,
Andrii Nakryiko <andrii@...nel.org>, linux-trace-kernel@...r.kernel.org, peterz@...radead.org,
oleg@...hat.com, rostedt@...dmis.org, mhiramat@...nel.org, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, jolsa@...nel.org, paulmck@...nel.org, willy@...radead.org,
akpm@...ux-foundation.org, linux-mm@...ck.org, Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH RFC v3 13/13] uprobes: add speculative lockless VMA to
inode resolution
On Thu, Aug 15, 2024 at 10:45:45AM -0700, Suren Baghdasaryan wrote:
> From all the above, my understanding of your objection is that
> checking mmap_lock during our speculation is too coarse-grained and
> you would prefer to use the VMA seq counter to check that the VMA we
> are working on is unchanged. I agree, that would be ideal. I had a
> quick chat with Jann about this and the conclusion we came to is that
> we would need to add an additional smp_wmb() barrier inside
> vma_start_write() and a smp_rmb() in the speculation code:
>
> static inline void vma_start_write(struct vm_area_struct *vma)
> {
> int mm_lock_seq;
>
> if (__is_vma_write_locked(vma, &mm_lock_seq))
> return;
>
> down_write(&vma->vm_lock->lock);
> /*
> * We should use WRITE_ONCE() here because we can have concurrent reads
> * from the early lockless pessimistic check in vma_start_read().
> * We don't really care about the correctness of that early check, but
> * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
> */
> WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
> + smp_wmb();
> up_write(&vma->vm_lock->lock);
> }
>
> Note: up_write(&vma->vm_lock->lock) in the vma_start_write() is not
> enough because it's one-way permeable (it's a "RELEASE operation") and
> later vma->vm_file store (or any other VMA modification) can move
> before our vma->vm_lock_seq store.
>
> This makes vma_start_write() heavier but again, it's write-locking, so
> should not be considered a fast path.
> With this change we can use the code suggested by Andrii in
> https://lore.kernel.org/all/CAEf4BzZeLg0WsYw2M7KFy0+APrPaPVBY7FbawB9vjcA2+6k69Q@mail.gmail.com/
> with an additional smp_rmb():
>
> rcu_read_lock()
> vma = find_vma(...)
> if (!vma) /* bail */
>
> vm_lock_seq = smp_load_acquire(&vma->vm_lock_seq);
> mm_lock_seq = smp_load_acquire(&vma->mm->mm_lock_seq);
> /* I think vm_lock has to be acquired first to avoid the race */
> if (mm_lock_seq == vm_lock_seq)
> /* bail, vma is write-locked */
> ... perform uprobe lookup logic based on vma->vm_file->f_inode ...
> smp_rmb();
> if (vma->vm_lock_seq != vm_lock_seq)
> /* bail, VMA might have changed */
>
> The smp_rmb() is needed so that vma->vm_lock_seq load does not get
> reordered and moved up before speculation.
>
> I'm CC'ing Jann since he understands memory barriers way better than
> me and will keep me honest.
>
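To make sure I am reading the proposal correctly, here is my attempt at a
compact, self-contained model of the pairing using C11 atomics. Every name
below is made up, and the release/acquire fences merely stand in for the
proposed smp_wmb()/smp_rmb(), so this is a sketch of the ordering, not the
kernel code:

#include <stdatomic.h>
#include <stdbool.h>

struct fake_vma {
	atomic_uint vm_lock_seq;	/* models vma->vm_lock_seq */
	_Atomic(void *) vm_file;	/* models vma->vm_file */
};

static atomic_uint fake_mm_lock_seq;	/* models mm->mm_lock_seq */

/* Writer side: models vma_start_write() followed by a vma modification. */
static void writer_lock_and_modify(struct fake_vma *vma, void *new_file)
{
	/* In the kernel the mmap lock is held for write here. */
	unsigned int seq = atomic_load_explicit(&fake_mm_lock_seq,
						memory_order_relaxed);

	/* Mark the vma write-locked... */
	atomic_store_explicit(&vma->vm_lock_seq, seq, memory_order_relaxed);
	/* ...and order that store before any later modification; the
	 * release fence plays the role of the proposed smp_wmb(). */
	atomic_thread_fence(memory_order_release);

	/* The actual modification of the vma. */
	atomic_store_explicit(&vma->vm_file, new_file, memory_order_relaxed);
}

/* Reader side: models the speculative lookup; false negatives are fine. */
static bool reader_speculate(struct fake_vma *vma, void **filep)
{
	unsigned int vm_seq = atomic_load_explicit(&vma->vm_lock_seq,
						   memory_order_acquire);
	unsigned int mm_seq = atomic_load_explicit(&fake_mm_lock_seq,
						   memory_order_acquire);

	if (mm_seq == vm_seq)
		return false;		/* vma is write-locked, bail */

	/* Speculative read of the field we care about. */
	*filep = atomic_load_explicit(&vma->vm_file, memory_order_relaxed);

	/* The acquire fence plays the role of the proposed smp_rmb(). */
	atomic_thread_fence(memory_order_acquire);

	if (atomic_load_explicit(&vma->vm_lock_seq,
				 memory_order_relaxed) != vm_seq)
		return false;		/* vma changed under us, bail */

	return true;
}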
So I briefly noted that maybe a down_read on the vma would do it, but per
Andrii, parallel lookups on the same vma on multiple CPUs are expected, which
whacks that out.
When I initially mentioned per-vma sequence counters I blindly assumed
they worked the usual way. I don't believe any fancy rework here is
warranted, especially given that the per-mm counter thing is expected to
have other uses.
However, chances are decent this can still be worked out with per-vma
granularity all while avoiding any stores on lookup and without
invasive (or complicated) changes. The lockless uprobe code claims to
guarantee only false negatives and the miss always falls back to the
mmap semaphore lookup. There may be something here, I'm going to chew on
it.
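Roughly, what I have in mind keeps the usual speculative-with-fallback shape.
The helpers below are made up just to show that shape; the only real
constraint, per the above, is that a speculative miss falls back to the mmap
semaphore protected lookup:

/* hypothetical helpers, only the overall shape matters */
struct inode *uprobe_get_inode(struct mm_struct *mm, unsigned long addr)
{
	struct inode *inode;

	/* lockless attempt; allowed to spuriously return NULL */
	inode = uprobe_get_inode_speculative(mm, addr);
	if (inode)
		return inode;

	/* miss: fall back to the mmap semaphore protected lookup */
	mmap_read_lock(mm);
	inode = uprobe_get_inode_locked(mm, addr);
	mmap_read_unlock(mm);

	return inode;
}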
That said, thank you both for the writeup so far.