Message-ID: <457267869.1379.1596718672867.JavaMail.zimbra@efficios.com>
Date: Thu, 6 Aug 2020 08:57:52 -0400 (EDT)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Will Deacon <will@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
paulmck <paulmck@...nel.org>,
Nicholas Piggin <npiggin@...il.com>,
Andy Lutomirski <luto@...capital.net>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 2/2] sched: membarrier: cover kthread_use_mm
----- On Aug 6, 2020, at 8:13 AM, Will Deacon will@...nel.org wrote:
> On Wed, Aug 05, 2020 at 11:22:36AM -0400, Mathieu Desnoyers wrote:
>> ----- On Aug 5, 2020, at 6:59 AM, Peter Zijlstra peterz@...radead.org wrote:
>> > On Tue, Aug 04, 2020 at 07:01:53PM +0200, peterz@...radead.org wrote:
>> >> On Tue, Aug 04, 2020 at 10:59:33AM -0400, Mathieu Desnoyers wrote:
>> >> > ----- On Aug 4, 2020, at 10:51 AM, Peter Zijlstra peterz@...radead.org wrote:
>> >> > > On Tue, Jul 28, 2020 at 12:00:10PM -0400, Mathieu Desnoyers wrote:
>> >> > >> task_lock(tsk);
>> >> > >> + /*
>> >> > >> + * When a kthread stops operating on an address space, the loop
>> >> > >> + * in membarrier_{private,global}_expedited() may not observe
>> >> > >> + * a matching tsk->mm, and thus not issue an IPI. Membarrier
>> >> > >> + * requires a memory barrier after accessing user-space memory,
>> >> > >> + * before clearing tsk->mm.
>> >> > >> + */
>> >> > >> + smp_mb();
>> >> > >> sync_mm_rss(mm);
>> >> > >> local_irq_disable();
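
(For context, the loop referred to in the comment looks roughly like
this, heavily simplified from kernel/sched/membarrier.c:

	for_each_online_cpu(cpu) {
		struct task_struct *p;

		p = rcu_dereference(cpu_rq(cpu)->curr);
		if (p && p->mm == mm)
			__cpumask_set_cpu(cpu, tmpmask);
	}
	...
	smp_call_function_many(tmpmask, ipi_mb, NULL, 1);

Without the barrier above, this loop can observe an already-cleared
tsk->mm and skip the IPI before the kthread's prior user-space accesses
have been ordered.)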
>> >> > >
>> >> > > Would it make sense to put the smp_mb() inside the IRQ disable region?
>> >> >
>> >> > I initially placed it right after task_lock so that we could eventually
>> >> > have a smp_mb__after_non_raw_spinlock or something with much better naming,
>> >> > which would allow removing the extra barrier when it is implied by the
>> >> > spinlock.
>> >>
>> >> Oh, right, fair enough. I'll go think about whether smp_mb__after_spinlock()
>> >> will work for mutexes too.
>> >>
>> >> It basically needs to upgrade atomic*_acquire() to smp_mb(). So that's
>> >> all architectures that have their own _acquire() and an actual
>> >> smp_mb__after_atomic().
>> >>
>> >> Which, off the top of my head, are only arm64, power and possibly riscv.
>> >> And if I then git-grep smp_mb__after_spinlock, all those seem to be
>> >> covered.
>> >>
>> >> But let me do a better audit..
>> >
>> > All I could find is csky, which, afaict, defines a superfluous
>> > smp_mb__after_spinlock.
>> >
>> > The relevant architectures are indeed power, arm64 and riscv, they all
>> > have custom acquire/release and all define smp_mb__after_spinlock()
>> > appropriately.
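
(For reference, those definitions are, at the time of writing, roughly:

	/* arm64, powerpc */
	#define smp_mb__after_spinlock()	smp_mb()

	/* riscv */
	#define smp_mb__after_spinlock()	RISCV_FENCE(rw, rw)

i.e. each of them upgrades the acquire in the lock to a full barrier.)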
>> >
>> > Should we rename it to smp_mb__after_acquire() ?
>>
>> As discussed over IRC, smp_mb__after_atomic_acquire() would be better, because
>> load_acquire and spin_lock have different semantics.
>
> Just to clarify here, are you talking about acquire on atomic RMW operations
> being different to non-RMW operations, or are you talking about
> atomic_read_acquire() being different to smp_load_acquire() (which I don't
> think is the case, but wanted to check)?
I was referring to the following two APIs:

- spin_lock()
- smp_load_acquire()

On x86, spin_lock() happens to be implemented with an atomic instruction, which
implies a full memory barrier. However, x86's smp_load_acquire() does not provide
a full memory barrier. Therefore, if we introduce an smp_mb__after_acquire() as
proposed by Peter, users could expect it to cover both APIs, which is tricky to
implement efficiently without adding a superfluous barrier.
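
For instance, x86's smp_load_acquire() boils down to roughly the following
(simplified from the x86 barrier headers, omitting the type assertion):

	#define smp_load_acquire(p)					\
	({								\
		typeof(*p) ___p1 = READ_ONCE(*p);			\
		barrier();	/* compiler barrier only, no fence */	\
		___p1;							\
	})

whereas spin_lock() ends up as a LOCK-prefixed read-modify-write
instruction, which acts as a full memory barrier on x86.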

Hence the discussion about making the name more specific, so it does not cover
smp_load_acquire().
>
> We need to write this stuff down.
>
>> We could keep smp_mb__after_spinlock defined as smp_mb__after_atomic_acquire
>> to make the transition simpler.
>
> I'm not sure I really see the benefit of the rename, to be honest with you,
> especially if smp_mb__after_spinlock() doesn't disappear at the same time.
> The only reason you'd use this barrier is because the atomic is hidden away
> behind a locking API, otherwise you'd just have used the full-barrier variant
> of the atomic op to start with, wouldn't you?
Good point.
As long as we can state that smp_mb__after_spinlock applies to both raw_spinlock
and non-raw spinlocks (which, AFAIU, are mutexes on RT), I think it would suffice
for our immediate needs.
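
The patch could then become something like the following (a sketch, untested):

	task_lock(tsk);
	/*
	 * Order the kthread's prior user-space accesses before the
	 * clearing of tsk->mm, so that membarrier_{private,global}_expedited()
	 * either observes the matching tsk->mm or the kthread's accesses
	 * are already ordered.
	 */
	smp_mb__after_spinlock();
	sync_mm_rss(mm);
	local_irq_disable();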
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com