Message-ID: <Y/tJ2n1e22YhsZ17@hirez.programming.kicks-ass.net>
Date: Sun, 26 Feb 2023 13:00:26 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: mingo@...hat.com, will@...nel.org, linux-kernel@...r.kernel.org,
boqun.feng@...il.com
Subject: Re: [PATCH 3/6] locking/rwsem: Rework writer wakeup
On Thu, Feb 23, 2023 at 04:38:08PM -0500, Waiman Long wrote:
> > @@ -1143,54 +1138,36 @@ rwsem_down_write_slowpath(struct rw_sema
> >  	} else {
> >  		atomic_long_or(RWSEM_FLAG_WAITERS, &sem->count);
> >  	}
> > +	raw_spin_unlock_irq(&sem->wait_lock);
> >  	/* wait until we successfully acquire the lock */
> > -	set_current_state(state);
> >  	trace_contention_begin(sem, LCB_F_WRITE);
> >  	for (;;) {
> > -		if (rwsem_try_write_lock(sem, &waiter)) {
> > -			/* rwsem_try_write_lock() implies ACQUIRE on success */
> > +		set_current_state(state);
> > +		if (!smp_load_acquire(&waiter.task)) {
> > +			/* Matches rwsem_waiter_wake()'s smp_store_release(). */
> >  			break;
> >  		}
> > -
> > -		raw_spin_unlock_irq(&sem->wait_lock);
> > -
> > -		if (signal_pending_state(state, current))
> > -			goto out_nolock;
> > -
> > -		/*
> > -		 * After setting the handoff bit and failing to acquire
> > -		 * the lock, attempt to spin on owner to accelerate lock
> > -		 * transfer. If the previous owner is a on-cpu writer and it
> > -		 * has just released the lock, OWNER_NULL will be returned.
> > -		 * In this case, we attempt to acquire the lock again
> > -		 * without sleeping.
> > -		 */
> > -		if (waiter.handoff_set) {
> > -			enum owner_state owner_state;
> > -
> > -			owner_state = rwsem_spin_on_owner(sem);
> > -			if (owner_state == OWNER_NULL)
> > -				goto trylock_again;
> > +		if (signal_pending_state(state, current)) {
> > +			raw_spin_lock_irq(&sem->wait_lock);
> > +			if (waiter.task)
> > +				goto out_nolock;
> > +			raw_spin_unlock_irq(&sem->wait_lock);
> > +			/* Ordered by sem->wait_lock against rwsem_mark_wake(). */
> > +			break;
> >  		}
> > -
> >  		schedule_preempt_disabled();
> >  		lockevent_inc(rwsem_sleep_writer);
> > -		set_current_state(state);
> > -trylock_again:
> > -		raw_spin_lock_irq(&sem->wait_lock);
> >  	}
> >  	__set_current_state(TASK_RUNNING);
> > -	raw_spin_unlock_irq(&sem->wait_lock);
> >  	lockevent_inc(rwsem_wlock);
> >  	trace_contention_end(sem, 0);
> >  	return sem;
> >  out_nolock:
> > -	__set_current_state(TASK_RUNNING);
> > -	raw_spin_lock_irq(&sem->wait_lock);
> >  	rwsem_del_wake_waiter(sem, &waiter, &wake_q);
> > +	__set_current_state(TASK_RUNNING);
> >  	lockevent_inc(rwsem_wlock_fail);
> >  	trace_contention_end(sem, -EINTR);
> >  	return ERR_PTR(-EINTR);
>
> I believe it is better to change state inside the wait_lock critical section
> to provide a release barrier for free.
I can't follow... a release for what? Note that the reader slowpath has
this exact form already.
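
For reference, the ordering the new wait loop relies on is the plain acquire/release
handshake on waiter.task: the waker publishes the handover with smp_store_release()
(per the comment in the hunk, in rwsem_waiter_wake()), and the waiter observes it with
smp_load_acquire(); only the signal_pending case goes back to wait_lock. Below is a
minimal userspace model of that handshake -- the names, the C11 atomics and the
busy-wait are stand-ins for the kernel primitives, not the rwsem code itself:

/* Toy model of the waiter.task handshake; NOT the rwsem implementation. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct toy_waiter {
	_Atomic(void *) task;	/* non-NULL while still queued */
	int lock_granted;	/* state the waker publishes */
};

static struct toy_waiter w;

/* Waker side: plays the role of rwsem_waiter_wake(). */
static void *waker(void *arg)
{
	(void)arg;
	w.lock_granted = 1;
	/* Release: the write above is visible once task reads as NULL. */
	atomic_store_explicit(&w.task, NULL, memory_order_release);
	return NULL;
}

int main(void)
{
	pthread_t t;
	int self;

	atomic_store_explicit(&w.task, &self, memory_order_relaxed);
	pthread_create(&t, NULL, waker, NULL);

	/* Waiter side: the real loop sleeps via schedule(); here we spin. */
	while (atomic_load_explicit(&w.task, memory_order_acquire))
		;

	/* Acquire pairs with the release above, so this prints 1. */
	printf("lock_granted = %d\n", w.lock_granted);
	pthread_join(t, NULL);
	return 0;
}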