Message-ID: <514CEB6A.8020300@redhat.com>
Date: Fri, 22 Mar 2013 19:38:18 -0400
From: Rik van Riel <riel@...hat.com>
To: Michel Lespinasse <walken@...gle.com>
CC: Rik van Riel <riel@...riel.com>, torvalds@...ux-foundation.org,
davidlohr.bueso@...com, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, hhuang@...hat.com, jason.low2@...com,
lwoodman@...hat.com, chegu_vinod@...com
Subject: Re: [PATCH 7/7] ipc,sem: fine grained locking for semtimedop
On 03/22/2013 07:01 PM, Michel Lespinasse wrote:
> Sorry for the late reply; I've been swamped and am behind on my upstream mail.
>
> On Wed, Mar 20, 2013 at 12:55 PM, Rik van Riel <riel@...riel.com> wrote:
>> +static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
>> + int nsops)
>> +{
>> + int locknum;
>> + if (nsops == 1 && !sma->complex_count) {
>> + struct sem *sem = sma->sem_base + sops->sem_num;
>> +
>> + /* Lock just the semaphore we are interested in. */
>> + spin_lock(&sem->lock);
>> +
>> + /*
>> + * If sma->complex_count was set while we were spinning,
>> + * we may need to look at things we did not lock here.
>> + */
>> + if (unlikely(sma->complex_count)) {
>> + spin_unlock(&sma->sem_perm.lock);
>
> I believe this should be spin_unlock(&sem->lock) instead ?
You are right. Good catch.
I'll send a one-liner fix for this to Andrew Morton,
since he already applied the patches in -mm.
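
For the archives, the one-liner against the patch as applied in -mm would presumably look like this (unlocking the per-semaphore lock we actually took, rather than the array lock we never acquired on this path):

```diff
 	if (unlikely(sma->complex_count)) {
-		spin_unlock(&sma->sem_perm.lock);
+		spin_unlock(&sem->lock);
 		goto lock_all;
 	}
```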
>> + goto lock_all;
>> + }
>> + locknum = sops->sem_num;
>> + } else {
>> + int i;
>> + /* Lock the sem_array, and all the semaphore locks */
>> + lock_all:
>> + spin_lock(&sma->sem_perm.lock);
>> + for (i = 0; i < sma->sem_nsems; i++) {
>
> Do we have to lock every sem from the array instead of just the ones
> that are being operated on in sops ?
> (I'm not sure either way, as I don't fully understand the queueing of
> complex ops)
We should be able to get away with locking just the ones that are
being operated on. However, that would require remembering which
ones we locked, so we know which ones to unlock again.
I do not know whether that additional complexity is worthwhile,
especially considering that in Davidlohr's profile, the major
caller locking everybody appears to be semctl, which passes no
semops into the kernel but simply operates on all semaphores
at once (SETALL).
> If we want to keep the loop as is, then we may at least remove the
> sops argument to sem_lock() since we only care about nsops.
We need to know exactly which semaphore to lock when we
are locking only one. The sops argument is used to figure
out which one.
--
All rights reversed