Message-ID: <CANN689FGoTF=5sbYSE7JyyS9CyyYTLrjHq5B0O23tFuCh550qg@mail.gmail.com>
Date: Fri, 29 Mar 2013 06:08:37 -0700
From: Michel Lespinasse <walken@...gle.com>
To: Rik van Riel <riel@...riel.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Sasha Levin <sasha.levin@...cle.com>,
torvalds@...ux-foundation.org, davidlohr.bueso@...com,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
hhuang@...hat.com, jason.low2@...com, lwoodman@...hat.com,
chegu_vinod@...com, Dave Jones <davej@...hat.com>,
benisty.e@...il.com, Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH v2 -mm -next] ipc,sem: fix lockdep false positive
On Fri, Mar 29, 2013 at 5:07 AM, Rik van Riel <riel@...riel.com> wrote:
> On 03/28/2013 10:50 PM, Michel Lespinasse wrote:
>> On Thu, Mar 28, 2013 at 1:23 PM, Rik van Riel <riel@...riel.com> wrote:
>>> if (unlikely(sma->complex_count)) {
>>> spin_unlock(&sem->lock);
>>> - goto lock_all;
>>> + goto lock_array;
>>> + }
>>> +
>>> + /*
>>> + * Another process is holding the global lock on the
>>> + * sem_array; we cannot enter our critical section,
>>> + * but have to wait for the global lock to be released.
>>> + */
>>> + if (unlikely(spin_is_locked(&sma->sem_perm.lock))) {
>>> + spin_unlock(&sem->lock);
>>> + goto again;
>>
>> This is IMO where the spin_unlock_wait(&sma->sem_perm.lock) would
>> belong - right before the goto again.
>
> That is where I had it initially. I may have gotten too clever
> and worked on keeping more accesses read-only. If you want, I
> can move it back here and re-submit the patch :)
I think I would prefer that - I feel having it earlier serves little
purpose, since you still need to check it here, so we might as well be
optimistic and check it only here. Or maybe I've missed the benefit of
having it earlier - I just don't see it.
>> Also - I think there is a risk that an endless stream of complex
>> semops could starve a simple semop here, as it would always find the
>> sem_perm.lock to be locked? One easy way to guarantee progress
>> would be to goto lock_array instead; however there is then the issue
>> that a complex semop could force an endless stream of following simple
>> semops to take the lock_array path. I'm not sure which of these
>> problems is preferable to have...
>
> If starvation turns out to be an issue, there is an even
> simpler solution:
>
> if (unlikely(spin_is_locked(&sma->sem_perm.lock))) {
> spin_unlock(&sem->lock);
> spin_lock(&sma->sem_perm.lock);
> spin_lock(&sem->lock);
> spin_unlock(&sma->sem_perm.lock);
> }
This is kinda nice - certainly nicer than falling back to the lock_array case.
It still makes it possible (though less likely) that each simple semop
taking this path could force the next one onto it too. So I'm not sure
how to balance that against the starvation possibility. I'll leave it
up to you then :)
Cheers,
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.