Message-ID: <5130F886.2070009@redhat.com>
Date: Fri, 01 Mar 2013 13:50:46 -0500
From: Rik van Riel <riel@...hat.com>
To: Davidlohr Bueso <davidlohr.bueso@...com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
"Vinod, Chegu" <chegu_vinod@...com>,
"Low, Jason" <jason.low2@...com>,
linux-tip-commits@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, aquini@...hat.com,
Michel Lespinasse <walken@...gle.com>,
Ingo Molnar <mingo@...nel.org>,
Larry Woodman <lwoodman@...hat.com>
Subject: Re: [tip:core/locking] x86/smp: Move waiting on contended ticket
lock out of line
On 03/01/2013 01:18 PM, Davidlohr Bueso wrote:
> On Fri, 2013-03-01 at 01:42 -0500, Rik van Riel wrote:
>> On 02/28/2013 06:09 PM, Linus Torvalds wrote:
>>
>>> So I almost think that *everything* there in the semaphore code could
>>> be done under RCU. The actual spinlock doesn't seem to much matter, at
>>> least for semaphores. The semaphore values themselves seem to be
>>> protected by the atomic operations, but I might be wrong about that, I
>>> didn't even check.
>>
>> Checking try_atomic_semop and do_smart_update, it looks like neither
>> is using atomic operations. That part of the semaphore code would
>> still benefit from spinlocks.
>
> Agreed.
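
(For context, a simplified paraphrase of one step of try_atomic_semop() -- not the
exact ipc/sem.c code, and the helper name here is made up -- shows why: semval is
updated with a plain read-modify-write, so the caller has to hold the lock.)

#include <linux/errno.h>
#include <linux/sem.h>		/* struct sembuf */

/* Paraphrased, simplified version of one operation step: the semaphore
 * value is read and written with ordinary loads and stores, so the
 * semaphore spinlock must be held across the whole update. */
static int apply_one_sop(int *semval, const struct sembuf *sop)
{
	int result = *semval + sop->sem_op;

	if (result < 0)
		return -EAGAIN;	/* would block; caller queues and sleeps */

	*semval = result;	/* plain store, not an atomic operation */
	return 0;
}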
If we assume that semctl calls and semop calls with more than one
semaphore operation are rare, we could do something smarter here.

We could turn the outer spinlock into an rwlock. If we are
doing a call that modifies the outer structure, or multiple
semops at once, we take the lock exclusively.

If we want to do just one semop, we can take the lock in
shared mode. Then each semaphore inside would have its own
spinlock, and we lock just that one.

Of course, that would just add overhead to the case where
a semaphore block has just one semaphore in it, so I'm not
sure this would be worthwhile at all...
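
In code, I imagine something like the sketch below. The rwlock and the
per-semaphore spinlock fields are made up for illustration; the real
struct sem / struct sem_array in ipc/sem.c look different.

#include <linux/spinlock.h>

/* Sketch only: simplified structures, hypothetical lock fields. */
struct sem {
	int		semval;
	int		sempid;
	spinlock_t	lock;		/* hypothetical per-semaphore lock */
};

struct sem_array {
	rwlock_t	array_lock;	/* hypothetical, replaces the ipc spinlock */
	int		sem_nsems;
	struct sem	sems[];		/* sem_nsems entries */
};

/* Fast path: a single semop takes the array lock shared and only the
 * spinlock of the one semaphore it touches. */
static void sem_lock_single(struct sem_array *sma, int semnum)
{
	read_lock(&sma->array_lock);
	spin_lock(&sma->sems[semnum].lock);
}

static void sem_unlock_single(struct sem_array *sma, int semnum)
{
	spin_unlock(&sma->sems[semnum].lock);
	read_unlock(&sma->array_lock);
}

/* Slow path: a multi-sop semop, or a semctl that modifies the array,
 * takes the array lock exclusive and so excludes all single-sop holders. */
static void sem_lock_complex(struct sem_array *sma)
{
	write_lock(&sma->array_lock);
}

static void sem_unlock_complex(struct sem_array *sma)
{
	write_unlock(&sma->array_lock);
}

Single-sop waiters on different semaphores in the same array would then
no longer serialize on one lock; the price is the extra lock round trip
for arrays that only contain one semaphore, as noted above.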
>> The way the code handles a whole batch of semops all at once,
>> potentially to multiple semaphores at once, and with the ability
>> to undo all of the operations, it looks like the spinlock will
>> still need to be per block of semaphores.
>>
>> I guess the code may still benefit from Michel's locking code,
>> after the permission stuff has been moved from under the spinlock.
>
> How about splitting ipc_lock()/ipc_lock_control() in two calls: one to
> obtain the ipc object (rcu_read_lock + idr_find), which can be called
> when performing the permissions and security checks, and another to
> obtain the ipcp->lock [q_]spinlock when necessary.
That is what I am working on now.
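
A minimal sketch of that split, assuming a hypothetical ipc_obtain_object()
helper (the names, the simplified struct ipc_ids, and the RCU/unlock
conventions are assumptions, not the real ipc/util.c code):

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/idr.h>
#include <linux/ipc.h>		/* struct kern_ipc_perm */
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/* Simplified stand-in for the real ipc_ids in ipc/util.h. */
struct ipc_ids {
	struct idr ipcs_idr;
};

/* Phase 1: RCU lookup only, enough for permission and security checks. */
static struct kern_ipc_perm *ipc_obtain_object(struct ipc_ids *ids, int idx)
{
	struct kern_ipc_perm *perm;

	rcu_read_lock();
	perm = idr_find(&ids->ipcs_idr, idx);
	if (!perm) {
		rcu_read_unlock();
		return ERR_PTR(-EINVAL);
	}
	return perm;	/* returned with rcu_read_lock() still held */
}

/* Phase 2: take the per-object spinlock only when we need to modify it. */
static void ipc_object_lock(struct kern_ipc_perm *perm)
{
	spin_lock(&perm->lock);
}

static void ipc_object_unlock_rcu(struct kern_ipc_perm *perm)
{
	spin_unlock(&perm->lock);
	rcu_read_unlock();
}

The fast path would then do the permission and security checks under RCU
alone, and only take the spinlock when it actually has to update the object.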
--
All rights reversed