Message-ID: <20160521004839.GA28231@linux-uzut.site>
Date: Fri, 20 May 2016 17:48:39 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Manfred Spraul <manfred@...orfullife.com>,
Waiman Long <Waiman.Long@....com>,
Ingo Molnar <mingo@...nel.org>, ggherdovich@...e.com,
Mel Gorman <mgorman@...hsingularity.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Will Deacon <will.deacon@....com>
Subject: Re: sem_lock() vs qspinlocks
On Fri, 20 May 2016, Linus Torvalds wrote:
>Oh, I definitely agree on the stable part, and yes, the "split things
>up" model should come later if people agree that it's a good thing.
The backporting part is quite nice, yes, but ultimately I think I prefer
Linus' suggestion of making things explicit, as opposed to relying on the
barriers the spinlock implementation happens to imply. I also hate having an
smp_mb() (particularly for spin_is_locked()) given that we are not optimizing
for the common case (regular mutual exclusion).
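To illustrate what I mean by explicit, here is a minimal sketch (hypothetical
names -- obj_lock, global_lock, complex_mode, fast_path -- not the actual
ipc/sem.c code): the one caller that needs the lock-word store ordered against
its subsequent load says so itself, instead of every spin_is_locked() user
paying for a full barrier:

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(global_lock);
  static bool complex_mode;

  static void fast_path(spinlock_t *obj_lock)
  {
          spin_lock(obj_lock);
          /*
           * Pair the store that makes obj_lock visibly held with the
           * load of complex_mode below.  The barrier lives in the one
           * caller that needs it, not in spin_lock()/spin_is_locked()
           * for everybody.
           */
          smp_mb();
          if (likely(!READ_ONCE(complex_mode)))
                  return;         /* fine-grained path: obj_lock alone suffices */

          /* otherwise drop obj_lock and fall back to global_lock ... */
          spin_unlock(obj_lock);
          spin_lock(&global_lock);
          /* ... whatever the slow path does ... */
          spin_unlock(&global_lock);
  }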
As opposed to spin_is_locked(), spin_unlock_wait() is perhaps more tempting
to use for locking correctness. For example, nf_conntrack_all_lock() also
likes to get smart with spin_unlock_wait() -- again for finer-grained locking.
While not identical to sems, it goes like:
   nf_conntrack_all_lock():         nf_conntrack_lock():
   spin_lock(B);                    spin_lock(A);
                                    if (bar) { // false
   bar = 1;                            ...
                                    }
   [loop ctrl-barrier]
   spin_unlock_wait(A);
   foo();                           foo();
If the spin_unlock_wait() doesn't yet see the store that makes A visibly locked,
we could end up with both threads in foo(), no? (Although I'm unsure about that
ctrl barrier and whether archs could actually fall into this; the point was to
show an in-tree example of creative thinking with locking.)
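For completeness, this is how I read that diagram as stand-alone code (again
hypothetical: A, B, bar and foo() are placeholders, not the actual nf_conntrack
code); the window is spin_unlock_wait(A) returning before the other CPU's
acquisition of A is globally visible:

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(A);
  static DEFINE_SPINLOCK(B);
  static int bar;

  static void foo(void) { /* work that must not run on both CPUs */ }

  /* roughly the nf_conntrack_all_lock() side */
  static void all_lock_side(void)
  {
          spin_lock(&B);
          bar = 1;
          /*
           * In the real code this sits in a loop over all per-bucket
           * locks (the [loop ctrl-barrier] above).  If the store that
           * makes A visibly locked on the other CPU has not propagated
           * yet, this can return immediately...
           */
          spin_unlock_wait(&A);
          foo();
  }

  /* roughly the nf_conntrack_lock() side */
  static void lock_side(void)
  {
          spin_lock(&A);
          if (bar)                /* ...while this load still sees 0 */
                  return;         /* would back off to the global lock */
          foo();                  /* -> both CPUs end up in foo() */
  }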
>Should I take the patch as-is, or should I just wait for a pull
>request from the locking tree? Either is ok by me.
I can verify that this patch fixes the issue.
Thanks,
Davidlohr