Message-ID: <20150611142139.GB29425@arm.com>
Date: Thu, 11 Jun 2015 15:21:40 +0100
From: Will Deacon <will.deacon@....com>
To: Waiman Long <Waiman.Long@...com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnd Bergmann <arnd@...db.de>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v2 1/2] locking/qrwlock: Fix bug in interrupt handling code

Hi Waiman,

On Tue, Jun 09, 2015 at 04:19:12PM +0100, Waiman Long wrote:
> The qrwlock is fair in process context, but it becomes unfair in
> interrupt context to support use cases like the tasklist_lock.
> However, the unfair code path taken in interrupt context has a
> problem that may cause deadlock.
>
> The fast path increments the reader count. In interrupt context, a
> reader in the slowpath will wait until the writer releases the lock.
> However, if other readers hold the lock and the writer is merely in
> the waiting state, the writer will never get the write lock because
> that interrupt-context reader has already incremented the count. This
> will cause a deadlock.

I'm probably just being thick here, but I'm struggling to understand the
deadlock case.
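
Just so we're looking at the same code, the read-lock fast path I have
in mind is roughly the following (a simplified sketch of
queue_read_lock() from include/asm-generic/qrwlock.h; exact details may
differ from the current tree):

	static inline void queue_read_lock(struct qrwlock *lock)
	{
		u32 cnts;

		/* Optimistically add a reader bias. */
		cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
		if (likely(!(cnts & _QW_WMASK)))
			return;		/* no writer: lock acquired */

		/* A writer holds or is waiting for the lock. */
		queue_read_lock_slowpath(lock);
	}
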
If a reader enters the slowpath in interrupt context, we spin while
(cnts & _QW_WMASK) == _QW_LOCKED. Consequently, if there is a writer in
the waiting state, that won't hold up the reader and so forward progress
is ensured. When the reader unlocks, the reader count is decremented and
the writer can take the lock.
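
That is, the interrupt-context leg of the slowpath is essentially the
following (again a simplified sketch, this time of
queue_read_lock_slowpath() in kernel/locking/qrwlock.c):

	if (unlikely(in_interrupt())) {
		/*
		 * Spin only while a writer actually holds the lock
		 * (_QW_LOCKED); a writer that is merely waiting
		 * (_QW_WAITING) does not hold up this reader.
		 */
		cnts = smp_load_acquire((u32 *)&lock->cnts);
		while ((cnts & _QW_WMASK) == _QW_LOCKED) {
			cpu_relax();
			cnts = smp_load_acquire((u32 *)&lock->cnts);
		}
		return;
	}
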
The only problematic case I can think of is if you had a steady stream
of readers in interrupt context, but that doesn't seem likely (and I
don't think this patch deals with that anyway).

What am I missing?

Will