Message-ID: <CAJuCfpFAvsMsBTBMaK5sHFkLQPrfE0nb401gEb2hmN2rbjza6g@mail.gmail.com>
Date: Mon, 9 Sep 2024 19:09:15 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Jann Horn <jannh@...gle.com>
Cc: Andrii Nakryiko <andrii@...nel.org>, linux-trace-kernel@...r.kernel.org,
peterz@...radead.org, oleg@...hat.com, rostedt@...dmis.org,
mhiramat@...nel.org, bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
jolsa@...nel.org, paulmck@...nel.org, willy@...radead.org,
akpm@...ux-foundation.org, linux-mm@...ck.org, mjguzik@...il.com,
brauner@...nel.org
Subject: Re: [PATCH 1/2] mm: introduce mmap_lock_speculation_{start|end}
On Mon, Sep 9, 2024 at 5:35 AM Jann Horn <jannh@...gle.com> wrote:
>
> On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko <andrii@...nel.org> wrote:
> > +static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
> > +{
> > +        /* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
> > +        return seq == smp_load_acquire(&mm->mm_lock_seq);
> > +}
>
> A load-acquire can't provide "end of locked section" semantics - a
> load-acquire is a one-way barrier, you can basically use it for
> "acquire lock" semantics but not for "release lock" semantics, because
> the CPU will prevent reordering the load with *later* loads but not
> with *earlier* loads. So if you do:
>
> mmap_lock_speculation_start()
> [locked reads go here]
> mmap_lock_speculation_end()
>
> then the CPU is allowed to reorder your instructions like this:
>
> mmap_lock_speculation_start()
> mmap_lock_speculation_end()
> [locked reads go here]
>
> so the lock is broken.
Hi Jann,
Thanks for the review!
Yeah, you are right, we do need an smp_rmb() before we compare
mm->mm_lock_seq with the stored seq.
Otherwise reads might get reordered this way:
CPU1                                                CPU2
mmap_lock_speculation_start() // seq = mm->mm_lock_seq
reloaded_seq = mm->mm_lock_seq; // reordered read
                                                    mmap_write_lock() // inc_mm_lock_seq(mm)
                                                    vma->vm_file = ...;
                                                    mmap_write_unlock() // inc_mm_lock_seq(mm)
<speculate>
mmap_lock_speculation_end() // return (reloaded_seq == seq)
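
Something along these lines on the reader side should prevent that (an
untested sketch, replacing the load-acquire with smp_rmb() + READ_ONCE()):

static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
{
        /* Order the speculative reads before the seq recheck. */
        smp_rmb();
        return seq == READ_ONCE(mm->mm_lock_seq);
}
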
>
> >  static inline void mmap_write_lock(struct mm_struct *mm)
> >  {
> >          __mmap_lock_trace_start_locking(mm, true);
> >          down_write(&mm->mmap_lock);
> > +        inc_mm_lock_seq(mm);
> >          __mmap_lock_trace_acquire_returned(mm, true, true);
> >  }
>
> Similarly, inc_mm_lock_seq(), which does a store-release, can only
> provide "release lock" semantics, not "take lock" semantics, because
> the CPU can reorder it with later stores; for example, this code:
>
> inc_mm_lock_seq()
> [locked stuff goes here]
> inc_mm_lock_seq()
>
> can be reordered into this:
>
> [locked stuff goes here]
> inc_mm_lock_seq()
> inc_mm_lock_seq()
>
> so the lock is broken.
Ugh, yes. We do need an smp_wmb() AFTER the inc_mm_lock_seq(). Whenever
we use inc_mm_lock_seq() for "take lock" semantics, it's preceded by a
down_write(&mm->mmap_lock) with implied ACQUIRE ordering, so I thought
we could rely on that, but I realize now that this reordering is still
possible:
CPU1                                                CPU2
mmap_write_lock()
    down_write(&mm->mmap_lock);
    vma->vm_file = ...;
                                                    mmap_lock_speculation_start() // seq = mm->mm_lock_seq
                                                    <speculate>
                                                    mmap_lock_speculation_end() // return (mm->mm_lock_seq == seq)
    inc_mm_lock_seq(mm);
mmap_write_unlock() // inc_mm_lock_seq(mm)
Is that what you were describing?
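
If so, a sketch of the fix on the write side (untested, with the smp_wmb()
placed after the increment as discussed above) would be:

static inline void mmap_write_lock(struct mm_struct *mm)
{
        __mmap_lock_trace_start_locking(mm, true);
        down_write(&mm->mmap_lock);
        inc_mm_lock_seq(mm);
        /* Order the seq increment before stores done under the lock. */
        smp_wmb();
        __mmap_lock_trace_acquire_returned(mm, true, true);
}
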
Thanks,
Suren.
>
> For "taking a lock" with a memory store, or "dropping a lock" with a
> memory load, you need heavier memory barriers, see
> Documentation/memory-barriers.txt.