Message-ID: <4E6EBDA8.2040401@vlnb.net>
Date: Mon, 12 Sep 2011 22:19:20 -0400
From: Vladislav Bolkhovitin <vst@...b.net>
To: Al Viro <viro@...IV.linux.org.uk>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: Lockdep and rw_semaphores
Al Viro, on 09/10/2011 10:38 PM wrote:
> On Sat, Sep 10, 2011 at 09:34:14PM -0400, Vladislav Bolkhovitin wrote:
>> Hello,
>>
>> Looks like lockdep is somehow over-restrictive for rw_semaphores when they
>> are taken for read (down_read()): it requires them to follow the same
>> inner-outer ordering rules as plain locks.
>>
>> For instance, code like:
>>
>> DECLARE_RWSEM(QQ_sem);
>> DECLARE_RWSEM(QQ1_sem);
>>
>> thread1:
>>
>> down_read(&QQ_sem);
>> down_read(&QQ1_sem);
>
>> thread2:
>>
>> down_read(&QQ1_sem);
>> down_read(&QQ_sem);
>
>> Is it by design or just something overlooked? I don't see how the reverse
>> order of down_read()s can lead to any deadlock. Or am I missing something?
>
> thread1: got QQ
> thread2: got QQ1
> thread3: tries to do down_write() on QQ, gets blocked
> thread4: tries to do down_write() on QQ1, gets blocked
>
> Now we have thread1 unable to get QQ1 once the thread trying to get it
> exclusive gets a shot at it. Thread2 is blocked the same way on QQ.
> And neither is going to release the (shared) lock it is holding, so
> thread3 and thread4 are not going to get anywhere either.
>
> IOW, ordering *is* needed there. Note that for the same reason trying to
> grab the same lock shared twice is a deadlock:
>
> A: already holds X shared
> B: blocks trying to grab it exclusive
> A: tries to grab it shared again and gets stuck, since there is a pending
> down_write() and we are guaranteed that writer will get served as soon
> as all current readers are through; no new readers are allowed to starve it.
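
For concreteness, the four-thread scenario above can be reproduced in
userspace. Below is a minimal sketch, assuming glibc's writer-preferring
rwlock kind (PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP) as a rough
stand-in for the fairness of the kernel's rwsem; the file name, thread
helpers and usleep()-based interleaving are illustrative scaffolding,
not anything from the kernel:

  /* abba_read_deadlock.c - build with: gcc -pthread abba_read_deadlock.c
   * The usleep() calls only make the deadly interleaving likely;
   * they do not guarantee it. */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_rwlock_t QQ_sem, QQ1_sem;

  static void *thread1_fn(void *arg)    /* got QQ shared, wants QQ1 */
  {
          pthread_rwlock_rdlock(&QQ_sem);
          usleep(200000);                   /* let the writers queue up */
          pthread_rwlock_rdlock(&QQ1_sem);  /* waits behind a writer */
          puts("thread1 done");             /* never reached on deadlock */
          return NULL;
  }

  static void *thread2_fn(void *arg)    /* got QQ1 shared, wants QQ */
  {
          pthread_rwlock_rdlock(&QQ1_sem);
          usleep(200000);
          pthread_rwlock_rdlock(&QQ_sem);   /* waits behind a writer */
          puts("thread2 done");
          return NULL;
  }

  static void *writer_fn(void *lock)    /* threads 3 and 4 */
  {
          pthread_rwlock_wrlock(lock);  /* blocks on the reader inside */
          puts("writer done");
          return NULL;
  }

  int main(void)
  {
          pthread_rwlockattr_t attr;
          pthread_t t[4];

          /* Writer preference is the crux: once a writer is pending,
           * new readers are not let in. */
          pthread_rwlockattr_init(&attr);
          pthread_rwlockattr_setkind_np(&attr,
                          PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
          pthread_rwlock_init(&QQ_sem, &attr);
          pthread_rwlock_init(&QQ1_sem, &attr);

          pthread_create(&t[0], NULL, thread1_fn, NULL);
          pthread_create(&t[1], NULL, thread2_fn, NULL);
          usleep(100000);               /* readers in; now queue writers */
          pthread_create(&t[2], NULL, writer_fn, &QQ_sem);
          pthread_create(&t[3], NULL, writer_fn, &QQ1_sem);

          for (int i = 0; i < 4; i++)
                  pthread_join(t[i], NULL); /* hangs: AB-BA via writers */
          return 0;
  }

With glibc's default reader-preferring kind the same program normally
runs to completion, which matches the intuition that pure readers cannot
deadlock each other; it is the queued writers that turn the reversed
read order into a deadlock.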
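
The recursive-read point at the end of the quote is the same mechanism
in miniature. Again a sketch, assuming the same glibc writer-preferring
kind; nothing here is the kernel API:

  /* recursive_read.c - build with: gcc -pthread recursive_read.c
   * A holds the lock shared, B queues an exclusive request, A tries
   * to take it shared again: with writer preference this hangs. */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_rwlock_t X;

  static void *b_fn(void *arg)
  {
          pthread_rwlock_wrlock(&X);    /* B: queues behind A's read hold */
          puts("B got it exclusive");
          pthread_rwlock_unlock(&X);
          return NULL;
  }

  int main(void)
  {
          pthread_rwlockattr_t attr;
          pthread_t b;

          pthread_rwlockattr_init(&attr);
          pthread_rwlockattr_setkind_np(&attr,
                          PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
          pthread_rwlock_init(&X, &attr);

          pthread_rwlock_rdlock(&X);    /* A: already holds X shared */
          pthread_create(&b, NULL, b_fn, NULL);
          usleep(100000);               /* let B's write request queue up */
          pthread_rwlock_rdlock(&X);    /* A again: waits behind B while
                                           B waits for A -> deadlock */
          puts("never printed");
          return 0;
  }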
Sure, if nested write locking is involved, lockdep should loudly complain, but
on the first nested write, not on a read. What's the point of complaining
about nested reads? I may not have nested write locking at all (and, in fact,
I don't).
Plus, the deadlock scenario lockdep described for me:

       CPU0                    CPU1
       ----                    ----
  lock(QQ1_sem);
                               lock(QQ_sem);
                               lock(QQ1_sem);
  lock(QQ_sem);

   *** DEADLOCK ***

is simply wrong: all four acquisitions here are down_read()s, and such a
deadlock can never happen, correct?
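
For what it's worth, a userspace sketch of exactly that two-CPU
interleaving, assuming all four acquisitions are the down_read()s in
question and no writer exists anywhere, terminates every time, since a
read lock only ever waits for a writer. The lock names mirror the
report; the rest is illustrative scaffolding:

  /* two_readers.c - build with: gcc -pthread two_readers.c
   * Two readers take both locks in opposite order; always completes. */
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_rwlock_t QQ_sem  = PTHREAD_RWLOCK_INITIALIZER;
  static pthread_rwlock_t QQ1_sem = PTHREAD_RWLOCK_INITIALIZER;

  static void *cpu1_fn(void *arg)                /* "CPU1" */
  {
          pthread_rwlock_rdlock(&QQ_sem);
          usleep(100000);                /* overlap with the other reader */
          pthread_rwlock_rdlock(&QQ1_sem);   /* granted: no writer pending */
          pthread_rwlock_unlock(&QQ1_sem);
          pthread_rwlock_unlock(&QQ_sem);
          return NULL;
  }

  int main(void)                                 /* "CPU0" */
  {
          pthread_t t;

          pthread_create(&t, NULL, cpu1_fn, NULL);
          pthread_rwlock_rdlock(&QQ1_sem);
          usleep(100000);
          pthread_rwlock_rdlock(&QQ_sem);    /* granted as well */
          pthread_rwlock_unlock(&QQ_sem);
          pthread_rwlock_unlock(&QQ1_sem);
          pthread_join(t, NULL);
          puts("no deadlock with readers only");
          return 0;
  }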
Thanks,
Vlad