Message-ID: <20161124114007.GE3092@twins.programming.kicks-ass.net>
Date: Thu, 24 Nov 2016 12:40:07 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Nicolai Hähnle <nhaehnle@...il.com>
Cc: Nicolai Hähnle <Nicolai.Haehnle@....com>,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
Ingo Molnar <mingo@...hat.com>, stable@...r.kernel.org,
Maarten Lankhorst <maarten.lankhorst@...onical.com>
Subject: Re: [PATCH 1/4] locking/ww_mutex: Fix a deadlock affecting ww_mutexes
On Thu, Nov 24, 2016 at 12:26:57PM +0100, Nicolai Hähnle wrote:
> I do believe we can win a bit by keeping the wait list sorted, if we also
> make sure that waiters don't add themselves in the first place if they see
> that a deadlock situation cannot be avoided.
>
> I will probably want to extend struct mutex_waiter with ww_mutex-specific
> fields to facilitate this (i.e. ctx pointer, perhaps stamp as well to reduce
> pointer-chasing). That should be fine since it lives on the stack.
Right, shouldn't be a problem I think.
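For illustration only, a rough sketch of what that extension could look like;
the existing fields are roughly what's in include/linux/mutex.h today, while
the ww additions and their names are just a guess, not an actual patch:

struct mutex_waiter {
	struct list_head	list;
	struct task_struct	*task;
#ifdef CONFIG_DEBUG_MUTEXES
	void			*magic;
#endif
	/*
	 * Hypothetical ww additions (names made up); the waiter lives on
	 * the locker's stack, so growing it is cheap.
	 */
	struct ww_acquire_ctx	*ww_ctx;	/* NULL for !ww waiters */
	unsigned long		stamp;		/* copy of ww_ctx->stamp, avoids pointer-chasing */
};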
The only 'problem' I can see with using that is that it's possible to mix
ww and !ww waiters through ww_mutex_lock(.ctx = NULL). This makes the
list order somewhat tricky.
Ideally we'd remove that feature, although I see it's actually used quite
a bit :/
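Short of removing it, one hand-wavy way a sorted list could cope with those
NULL-ctx waiters, reusing the hypothetical ww_ctx field sketched above:
ctx-less waiters keep FIFO order among themselves and sort ahead of ww
waiters, ww waiters go oldest stamp first. The real ordering rule would
still need to be decided properly; this is just a sketch:

/*
 * Sketch only: should waiter @a be queued before waiter @b?
 * Ctx-less waiters (ww_ctx == NULL, e.g. ww_mutex_lock(lock, NULL))
 * keep FIFO order among themselves and sort ahead of ww waiters;
 * ww waiters are ordered by stamp, lowest (oldest) first.
 */
static bool __ww_waiter_before(struct mutex_waiter *a, struct mutex_waiter *b)
{
	if (!a->ww_ctx && !b->ww_ctx)
		return false;		/* keep FIFO among ctx-less waiters */

	if (!a->ww_ctx || !b->ww_ctx)
		return !a->ww_ctx;	/* ctx-less waiters go to the front */

	/* wraparound-safe: lower (older) stamp wins */
	return (long)(a->ww_ctx->stamp - b->ww_ctx->stamp) < 0;
}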
> In the meantime, I'd appreciate it if patch #1 could be accepted as-is for
> stable updates to <= 4.8. It fixes a real (if rare) bug, and the stampede
> inefficiency isn't a problem in practice, at least for GPU applications.
Sorry, can't do. We don't do stable patches that don't have anything
upstream.