Message-ID: <1364308023.5053.40.camel@laptop>
Date: Tue, 26 Mar 2013 15:27:03 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Michel Lespinasse <walken@...gle.com>
Cc: Rik van Riel <riel@...riel.com>,
Sasha Levin <sasha.levin@...cle.com>,
torvalds@...ux-foundation.org, davidlohr.bueso@...com,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
hhuang@...hat.com, jason.low2@...com, lwoodman@...hat.com,
chegu_vinod@...com, Dave Jones <davej@...hat.com>,
benisty.e@...il.com, Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH -mm -next] ipc,sem: fix lockdep false positive
On Tue, 2013-03-26 at 06:40 -0700, Michel Lespinasse wrote:
> sem_nsems is user provided as the array size in the semget system
> call. It's the size of an ipc semaphore array.
So we're basically adding a random (big) number to preempt_count
(obviously while preemption is disabled), seems rather costly and
undesirable.
> complex semop operations take the array's lock plus every semaphore
> locks; simple semop operations (operating on a single semaphore) only
> take that one semaphore's lock.
Right, standard global/local lock like stuff. Is there a way we can add
an r/o test to the 'local' lock operation and avoid doing the above?
Maybe something like:
void sma_lock(struct sem_array *sma)	/* global */
{
	int i;

	sma->global_locked = 1;
	smp_wmb(); /* can we merge with the LOCK ? */
	spin_lock(&sma->global_lock);

	/* wait for all local locks to go away */
	for (i = 0; i < sma->sem_nsems; i++)
		spin_unlock_wait(&sma->sem_base[i].lock);
}

void sma_lock_one(struct sem_array *sma, int nr)	/* local */
{
	smp_rmb(); /* pairs with wmb in sma_lock() */
	if (unlikely(sma->global_locked)) {
		/* wait for the global lock to go away */
		while (sma->global_locked)
			spin_unlock_wait(&sma->global_lock);
	}
	spin_lock(&sma->sem_base[nr].lock);
}
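(For completeness, the matching unlock side of such a scheme might look like
the sketch below; it is not part of the original mail and assumes the same
hypothetical global_locked flag. Clearing the flag before the global unlock
is covered by the release semantics of spin_unlock().)

	void sma_unlock(struct sem_array *sma)	/* global */
	{
		sma->global_locked = 0;
		spin_unlock(&sma->global_lock);
	}

	void sma_unlock_one(struct sem_array *sma, int nr)	/* local */
	{
		spin_unlock(&sma->sem_base[nr].lock);
	}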
This still has the problem of a non-preemptible section of duration
O(sem_nsems) (times the avg wait-time on each local lock). Could we make
the global lock a sleeping lock?