Message-ID: <51A32F0E.9000206@canonical.com>
Date: Mon, 27 May 2013 12:01:50 +0200
From: Maarten Lankhorst <maarten.lankhorst@...onical.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
x86@...nel.org, dri-devel@...ts.freedesktop.org,
linaro-mm-sig@...ts.linaro.org, robclark@...il.com,
rostedt@...dmis.org, tglx@...utronix.de, mingo@...e.hu,
linux-media@...r.kernel.org, Dave Airlie <airlied@...hat.com>
Subject: Re: [PATCH v3 2/3] mutex: add support for wound/wait style locks,
v3
On 27-05-13 10:21, Peter Zijlstra wrote:
> On Wed, May 22, 2013 at 07:24:38PM +0200, Maarten Lankhorst wrote:
>>>> +static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
>>>> + struct ww_class *ww_class)
>>>> +{
>>>> + ctx->task = current;
>>>> + do {
>>>> + ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
>>>> + } while (unlikely(!ctx->stamp));
>>> I suppose we'll figure something out when this becomes a bottleneck. Ideally
>>> we'd do something like:
>>>
>>> ctx->stamp = local_clock();
>>>
>>> but for now we cannot guarantee that's not jiffies, and I suppose that's a tad
>>> too coarse to work for this.
>> This might mess up when 2 cores happen to return exactly the same time; how do you choose a winner in that case?
>> EDIT: Using the pointer address like you suggested below is fine with me. The ctx pointer would be static enough.
> Right, but for now I suppose the 'global' atomic is ok; if/when we find
> it hurts performance we can revisit. I was just spewing ideas :-)
If accurate timers are available, it wouldn't be a bad idea. I fixed up the code so it at least supports this case, should it happen.
For now the source of the stamp is still a single atomic_long.
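
Just to illustrate what a clock-based stamp might look like -- rough sketch only, __ww_ctx_stamp_after() is a made-up name and not part of the patch; ties would be broken on the ctx address as discussed above:

static inline bool __ww_ctx_stamp_after(struct ww_acquire_ctx *a,
					struct ww_acquire_ctx *b)
{
	/* Wrap-safe stamp comparison; stamp from e.g. local_clock(). */
	if (a->stamp != b->stamp)
		return (long)(a->stamp - b->stamp) > 0;
	/*
	 * Two CPUs read the same clock value: break the tie on the
	 * context address, no two live contexts can share one.
	 */
	return a > b;
}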
>>> Also, why is 0 special?
>> Oops, 0 is no longer special.
>>
>> I used to set the stamp directly on the lock, so 0 used to mean no ctx set.
> Ah, ok :-)
>
>>>> +static inline int __must_check ww_mutex_trylock_single(struct ww_mutex *lock)
>>>> +{
>>>> + return mutex_trylock(&lock->base);
>>>> +}
>>> trylocks can never deadlock; they don't block by definition. I don't see the
>>> point of the _single() thing here.
>> I called it _single because they weren't annotated into any ctx. I can drop the _single suffix,
>> but you'd still need to unlock with unlock_single, or we'd need to remove that distinction altogether,
>> lose a few lockdep checks, and have only one unlock function.
> Again, early.. monday.. would a trylock, even if successful, still need
> the ctx?
Trylock with a ctx is not supported. You can still do a trylock while holding a context, but the mutex won't
become part of that context; normal lockdep rules apply. From lib/locking-selftest.c:
context + ww_mutex_lock first, then a trylock:
	dotest(ww_test_context_try, SUCCESS, LOCKTYPE_WW);
trylock first, then context + ww_mutex_lock:
	dotest(ww_test_try_context, FAILURE, LOCKTYPE_WW);
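
For example (rough sketch using the names from this v3; a and b are ww_mutexes of my_class, which is just a placeholder name here):

struct ww_acquire_ctx ctx;

ww_acquire_init(&ctx, &my_class);
if (!ww_mutex_lock(&a, &ctx)) {			/* tracked by the context */
	if (ww_mutex_trylock_single(&b)) {	/* not tracked at all */
		/* ... */
		ww_mutex_unlock_single(&b);
	}
	ww_mutex_unlock(&a);
}
ww_acquire_fini(&ctx);

Doing it the other way around, trylock on b first and then taking a with the context, is exactly what the FAILURE case above catches.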
For now I don't want to add support for a trylock with a context. I'm very glad I managed to fix the ttm locking
so it no longer requires this; it was only needed there as a workaround for the locking being wrong.
There was no annotation for the buffer locking ttm was using, so the real problem wasn't easy to spot.
~Maarten