Message-ID: <20200806080351.GA31889@willie-the-truck>
Date: Thu, 6 Aug 2020 13:13:46 +0100
From: Will Deacon <will@...nel.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
paulmck <paulmck@...nel.org>,
Nicholas Piggin <npiggin@...il.com>,
Andy Lutomirski <luto@...capital.net>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 2/2] sched: membarrier: cover kthread_use_mm
On Wed, Aug 05, 2020 at 11:22:36AM -0400, Mathieu Desnoyers wrote:
> ----- On Aug 5, 2020, at 6:59 AM, Peter Zijlstra peterz@...radead.org wrote:
> > On Tue, Aug 04, 2020 at 07:01:53PM +0200, peterz@...radead.org wrote:
> >> On Tue, Aug 04, 2020 at 10:59:33AM -0400, Mathieu Desnoyers wrote:
> >> > ----- On Aug 4, 2020, at 10:51 AM, Peter Zijlstra peterz@...radead.org wrote:
> >> > > On Tue, Jul 28, 2020 at 12:00:10PM -0400, Mathieu Desnoyers wrote:
> >> > >> task_lock(tsk);
> >> > >> + /*
> >> > >> + * When a kthread stops operating on an address space, the loop
> >> > >> + * in membarrier_{private,global}_expedited() may not observe
> >> > >> + * tsk->mm, and therefore not issue an IPI. Membarrier requires a
> >> > >> + * memory barrier after accessing user-space memory, before
> >> > >> + * clearing tsk->mm.
> >> > >> + */
> >> > >> + smp_mb();
> >> > >> sync_mm_rss(mm);
> >> > >> local_irq_disable();
> >> > >
> >> > > Would it make sense to put the smp_mb() inside the IRQ disable region?
> >> >
> >> > I initially placed it right after task_lock() so we could eventually
> >> > have a smp_mb__after_non_raw_spinlock() or something with a much better name,
> >> > which would allow removing the extra barrier when it is implied by the
> >> > spinlock.
> >>
> >> Oh, right, fair enough. I'll go think about if smp_mb__after_spinlock()
> >> will work for mutexes too.
> >>
> >> It basically needs to upgrade atomic*_acquire() to smp_mb(). So that's
> >> all architectures that have their own _acquire() and an actual
> >> smp_mb__after_atomic().
> >>
> >> Which, off the top of my head, are only arm64, power and possibly riscv.
> >> And if I then git-grep smp_mb__after_spinlock, all those seem to be
> >> covered.
> >>
> >> But let me do a better audit..
> >
> > All I could find is csky, which, afaict, defines a superfluous
> > smp_mb__after_spinlock.
> >
> > The relevant architectures are indeed power, arm64 and riscv, they all
> > have custom acquire/release and all define smp_mb__after_spinlock()
> > appropriately.
> >
> > Should we rename it to smp_mb__after_acquire() ?
>
> As discussed over IRC, smp_mb__after_atomic_acquire() would be better, because
> load_acquire and spin_lock have different semantics.
Just to clarify here, are you talking about acquire on atomic RMW operations
being different to non-RMW operations, or are you talking about
atomic_read_acquire() being different to smp_load_acquire() (which I don't
think is the case, but wanted to check)?
We need to write this stuff down.
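
For concreteness, the two flavours I have in mind look roughly like this
(purely illustrative, not proposed code; the function and variable names
are made up):

	#include <linux/atomic.h>

	/* Sketch only: contrast an RMW-acquire with a plain load-acquire. */
	static bool acquire_flavours(atomic_t *lock_val, int *flag)
	{
		/* (1) acquire on an atomic RMW, e.g. what a lock fastpath boils down to */
		int old = atomic_cmpxchg_acquire(lock_val, 0, 1);

		/* (2) acquire on a plain (non-RMW) load */
		int v = smp_load_acquire(flag);

		return old == 0 && v != 0;
	}

Either way, ACQUIRE only orders the acquiring access against the accesses
that follow it; it doesn't order the accesses before it against the ones
after it, which is exactly the gap the full-barrier upgrade has to plug.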
> We could keep a define of smp_mb__after_spinlock to smp_mb__after_atomic_acquire
> to make the transition simpler.
I'm not sure I really see the benefit of the rename, to be honest with you,
especially if smp_mb__after_spinlock() doesn't disappear at the same time.
The only reason you'd use this barrier is because the atomic is hidden away
behind a locking API, otherwise you'd just have used the full-barrier variant
of the atomic op to start with, wouldn't you?
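Roughly (again, just a sketch to make the point, not proposed code; the
names are made up):

	#include <linux/atomic.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);
	static atomic_t example_val = ATOMIC_INIT(0);

	/* Atomic in your own code: just pick the fully-ordered op. */
	static bool claim_with_atomic(void)
	{
		return atomic_cmpxchg(&example_val, 0, 1) == 0;
	}

	/* Atomic buried inside the locking API: upgrade it after the fact. */
	static void claim_with_lock(void)
	{
		spin_lock(&example_lock);
		smp_mb__after_spinlock();	/* lock acquisition + this act as a full barrier */
		/* ... */
		spin_unlock(&example_lock);
	}
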
Will