Message-ID: <239fa361-331a-a7b6-9a0d-a6baa19a5003@gmail.com>
Date: Fri, 16 Dec 2016 14:17:25 +0100
From: Nicolai Hähnle <nhaehnle@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org,
Nicolai Hähnle <Nicolai.Haehnle@....com>,
Ingo Molnar <mingo@...hat.com>,
Maarten Lankhorst <dev@...ankhorst.nl>,
Daniel Vetter <daniel@...ll.ch>,
Chris Wilson <chris@...is-wilson.co.uk>,
dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v2 04/11] locking/ww_mutex: Set use_ww_ctx even when
locking without a context
On 06.12.2016 16:25, Peter Zijlstra wrote:
> On Thu, Dec 01, 2016 at 03:06:47PM +0100, Nicolai Hähnle wrote:
>
>> @@ -640,10 +640,11 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>> struct mutex_waiter waiter;
>> unsigned long flags;
>> bool first = false;
>> - struct ww_mutex *ww;
>> int ret;
>>
>> - if (use_ww_ctx) {
>> + if (use_ww_ctx && ww_ctx) {
>> + struct ww_mutex *ww;
>> +
>> ww = container_of(lock, struct ww_mutex, base);
>> if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
>> return -EALREADY;
>
> So I don't see the point of removing *ww from the function scope; we can
> still compute that container_of() even if !ww_ctx, right? That would
> save a ton of churn below, adding all those struct ww_mutex declarations
> and container_of() casts.
>
> (and note that the container_of() is a fancy NO-OP because base is the
> first member).
Sorry for taking so long to get back to you.
In my experience, GCC's undefined behavior sanitizer for userspace
programs complains about merely casting a pointer to the wrong type. I
never went down the standards rabbit hole to figure out the details. It
might be a C++-only thing (ubsan cannot tell the difference otherwise
anyway), but that was the reason for doing the change in this more
complicated way.
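For illustration, a rough userspace analogue of the cast I mean
(hypothetical names, untested sketch, not the kernel code):

#include <stddef.h>

struct base { int x; };
struct outer { struct base b; int extra; };	/* b is the first member */

static struct outer *to_outer(struct base *b)
{
	/* same shape as container_of(lock, struct ww_mutex, base);
	 * a no-op adjustment here because b sits at offset 0 */
	return (struct outer *)((char *)b - offsetof(struct outer, b));
}

int main(void)
{
	struct base plain;			/* NOT embedded in a struct outer */
	struct outer *o = to_outer(&plain);	/* the cast in question */
	(void)o;				/* never dereferenced */
	return 0;
}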
Are you sure that this is defined behavior in C? If so, I'd be happy to
go with the version that has less churn.
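For concreteness, the lower-churn version I understand you to be
suggesting would look roughly like this (untested sketch):

	struct ww_mutex *ww;
	int ret;

	/* effectively a no-op cast, since base is the first member
	 * of struct ww_mutex */
	ww = container_of(lock, struct ww_mutex, base);

	if (use_ww_ctx && ww_ctx) {
		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
			return -EALREADY;
	}
	...
	if (use_ww_ctx && ww_ctx)
		ww_mutex_set_context_fastpath(ww, ww_ctx);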
I'll also get rid of those ww_mutex_lock* wrapper functions.
Thanks,
Nicolai
>
>> @@ -656,8 +657,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>> mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, false)) {
>> /* got the lock, yay! */
>> lock_acquired(&lock->dep_map, ip);
>> - if (use_ww_ctx)
>> + if (use_ww_ctx && ww_ctx) {
>> + struct ww_mutex *ww;
>> +
>> + ww = container_of(lock, struct ww_mutex, base);
>> ww_mutex_set_context_fastpath(ww, ww_ctx);
>> + }
>> preempt_enable();
>> return 0;
>> }
>> @@ -702,7 +707,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>> goto err;
>> }
>>
>> - if (use_ww_ctx && ww_ctx->acquired > 0) {
>> + if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
>> ret = __ww_mutex_lock_check_stamp(lock, ww_ctx);
>> if (ret)
>> goto err;
>> @@ -742,8 +747,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>> /* got the lock - cleanup and rejoice! */
>> lock_acquired(&lock->dep_map, ip);
>>
>> - if (use_ww_ctx)
>> + if (use_ww_ctx && ww_ctx) {
>> + struct ww_mutex *ww;
>> +
>> + ww = container_of(lock, struct ww_mutex, base);
>> ww_mutex_set_context_slowpath(ww, ww_ctx);
>> + }
>>
>> spin_unlock_mutex(&lock->wait_lock, flags);
>> preempt_enable();
>
> All that then reverts to:
>
> - if (use_ww_ctx)
> + if (use_ww_ctx && ww_ctx)
>
>