Message-ID: <51304DDA.40808@redhat.com>
Date: Fri, 01 Mar 2013 01:42:34 -0500
From: Rik van Riel <riel@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Davidlohr Bueso <davidlohr.bueso@...com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
"Vinod, Chegu" <chegu_vinod@...com>,
"Low, Jason" <jason.low2@...com>,
linux-tip-commits@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, aquini@...hat.com,
Michel Lespinasse <walken@...gle.com>,
Ingo Molnar <mingo@...nel.org>,
Larry Woodman <lwoodman@...hat.com>
Subject: Re: [tip:core/locking] x86/smp: Move waiting on contended ticket
lock out of line
On 02/28/2013 06:09 PM, Linus Torvalds wrote:
> So I almost think that *everything* there in the semaphore code could
> be done under RCU. The actual spinlock doesn't seem to much matter, at
> least for semaphores. The semaphore values themselves seem to be
> protected by the atomic operations, but I might be wrong about that, I
> didn't even check.
Looking at try_atomic_semop and do_smart_update, it appears that
neither uses atomic operations, so that part of the semaphore code
would still benefit from spinlocks.
Because the code handles a whole batch of semops at once, potentially
spanning multiple semaphores in the array, and must be able to undo
all of the operations, it looks like the spinlock will still need to
be per block (array) of semaphores.
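To make that concrete, here is a minimal userspace sketch (not the
actual ipc/sem.c code; sem_array, sem_op and try_semop_batch are
made-up names) of why the whole batch wants a single per-array lock:
every operation in the batch has to succeed, or the earlier ones have
to be rolled back, and per-value atomics alone cannot give you that.

    /*
     * Simplified illustration only.  One lock covers the whole block
     * of semaphores so the batch is applied all-or-nothing.
     */
    #include <pthread.h>
    #include <stdbool.h>

    struct sem_array {
            pthread_mutex_t lock;   /* one lock per block of semaphores */
            int nsems;
            int *values;
    };

    struct sem_op {
            int num;                /* which semaphore in the array */
            int delta;              /* how much to add (may be negative) */
    };

    /* Apply all operations or none of them. */
    bool try_semop_batch(struct sem_array *sma, struct sem_op *ops, int nops)
    {
            int i;

            pthread_mutex_lock(&sma->lock);
            for (i = 0; i < nops; i++) {
                    if (sma->values[ops[i].num] + ops[i].delta < 0)
                            goto undo;      /* would block: roll back */
                    sma->values[ops[i].num] += ops[i].delta;
            }
            pthread_mutex_unlock(&sma->lock);
            return true;

    undo:
            while (--i >= 0)
                    sma->values[ops[i].num] -= ops[i].delta;
            pthread_mutex_unlock(&sma->lock);
            return false;
    }

try_atomic_semop does roughly this dance over the array, plus the
blocking and undo bookkeeping, which is why it still wants the lock.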
I guess the code may still benefit from Michel's locking code, once
the permission checks have been moved out from under the spinlock.
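For the lookup side, the pattern I have in mind looks roughly like the
sketch below (simplified userspace code; sem_lookup, perm_check,
sem_lock_check and the deleted flag are stand-ins for the real RCU
lookup, ipcperms and removal handling): do the permission check before
taking the per-array lock, then re-check after locking that the array
was not removed in the meantime.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct sem_array {
            pthread_mutex_t lock;
            bool deleted;           /* set by the RMID path, under the lock */
            unsigned int uid;       /* stand-in for the full ipc_perm data */
    };

    static struct sem_array the_array = {
            .lock = PTHREAD_MUTEX_INITIALIZER,
    };

    /* Stand-in for the (eventually RCU-protected) id-to-array lookup. */
    static struct sem_array *sem_lookup(int id)
    {
            return id == 0 ? &the_array : NULL;
    }

    /* Stand-in for ipcperms(); placeholder permission model. */
    static bool perm_check(struct sem_array *sma, unsigned int uid)
    {
            return sma->uid == uid;
    }

    /* Look up, check permissions outside the lock, then lock and re-check. */
    struct sem_array *sem_lock_check(int id, unsigned int uid)
    {
            struct sem_array *sma = sem_lookup(id);

            if (!sma)
                    return NULL;

            /* Permission check done before, not under, the per-array lock. */
            if (!perm_check(sma, uid))
                    return NULL;

            pthread_mutex_lock(&sma->lock);
            /* The array may have been removed while we were not holding it. */
            if (sma->deleted) {
                    pthread_mutex_unlock(&sma->lock);
                    return NULL;
            }
            return sma;             /* caller unlocks when done */
    }

The re-check of the deleted flag is what makes it safe to pull the
permission work out from under the spinlock; the removal path would
set it under the same lock before tearing the array down.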
Two remaining worries are the security_sem_free call and the
non-RCU list_del calls from freeary, which is called from the
IPC_RMID code. They are probably fine, but worth another pair of eyes...
--
All rights reversed