Message-ID: <20190417124101.GE4038@hirez.programming.kicks-ass.net>
Date: Wed, 17 Apr 2019 14:41:01 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Davidlohr Bueso <dave@...olabs.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 08/16] locking/rwsem: Make rwsem_spin_on_owner()
return owner state
On Sat, Apr 13, 2019 at 01:22:51PM -0400, Waiman Long wrote:
> In the special case that there is no active lock and the handoff bit
> is set, optimistic spinning has to be stopped.
> @@ -500,9 +521,19 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
> 
>  	/*
>  	 * If there is a new owner or the owner is not set, we continue
> -	 * spinning.
> +	 * spinning except when there is no active lock and the handoff bit
> +	 * is set. In this case, we have to stop spinning.
>  	 */
> -	return is_rwsem_owner_spinnable(READ_ONCE(sem->owner));
> +	owner = READ_ONCE(sem->owner);
> +	if (!is_rwsem_owner_spinnable(owner))
> +		return OWNER_NONSPINNABLE;
> +	if (owner && !is_rwsem_owner_reader(owner))
> +		return OWNER_WRITER;
> +
> +	count = atomic_long_read(&sem->count);
> +	if (RWSEM_COUNT_HANDOFF(count) && !RWSEM_COUNT_LOCKED(count))
> +		return OWNER_NONSPINNABLE;
> +	return !owner ? OWNER_NULL : OWNER_READER;
>  }
So this fixes a straight-up bug in the previous patch (and thus should be
done before it, so the bug never exists), and it creates unreadable code
while at it.
Also, I think only checking HANDOFF after the loop is wrong; the moment
HANDOFF happens you have to terminate the loop, irrespective of what
@owner does.
Does something like so work?
---
enum owner_state {
	OWNER_NULL		= 1 << 0,
	OWNER_WRITER		= 1 << 1,
	OWNER_READER		= 1 << 2,
	OWNER_NONSPINNABLE	= 1 << 3,
};

#define OWNER_SPINNABLE		(OWNER_NULL | OWNER_WRITER)
static inline enum owner_state rwsem_owner_state(unsigned long owner)
{
	if (!owner)
		return OWNER_NULL;

	if (owner & RWSEM_ANONYMOUSLY_OWNED)
		return OWNER_NONSPINNABLE;

	if (owner & RWSEM_READER_OWNER)
		return OWNER_READER;

	return OWNER_WRITER;
}
static noinline enum owner_state rwsem_spin_on_owner(struct rw_semaphore *sem)
{
	struct task_struct *tmp, *owner = READ_ONCE(sem->owner);
	enum owner_state state;

	rcu_read_lock();
	for (;;) {
		state = rwsem_owner_state((unsigned long)owner);
		if (!(state & OWNER_SPINNABLE))
			break;

		if (atomic_long_read(&sem->count) & RWSEM_FLAG_HANDOFF) {
			state = OWNER_NONSPINNABLE;
			break;
		}

		tmp = READ_ONCE(sem->owner);
		if (tmp != owner) {
			state = rwsem_owner_state((unsigned long)tmp);
			break;
		}

		/*
		 * Ensure we emit the owner->on_cpu, dereference _after_
		 * checking sem->owner still matches owner, if that fails,
		 * owner might point to free()d memory, if it still matches,
		 * the rcu_read_lock() ensures the memory stays valid.
		 */
		barrier();

		if (need_resched() || !owner_on_cpu(owner)) {
			state = OWNER_NONSPINNABLE;
			break;
		}

		cpu_relax();
	}
	rcu_read_unlock();

	return state;
}