Message-ID: <20190417080549.GA4038@hirez.programming.kicks-ass.net>
Date: Wed, 17 Apr 2019 10:05:49 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Davidlohr Bueso <dave@...olabs.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 07/16] locking/rwsem: Implement lock handoff to
prevent lock starvation
On Tue, Apr 16, 2019 at 02:16:11PM -0400, Waiman Long wrote:
> >> @@ -608,56 +687,63 @@ __rwsem_down_write_failed_common(struct rw_semaphore *sem, int state)
> >>  	 */
> >>  	waiter.task = current;
> >>  	waiter.type = RWSEM_WAITING_FOR_WRITE;
> >> +	waiter.timeout = jiffies + RWSEM_WAIT_TIMEOUT;
> >> 
> >>  	raw_spin_lock_irq(&sem->wait_lock);
> >> 
> >>  	/* account for this before adding a new element to the list */
> >> +	wstate = list_empty(&sem->wait_list) ? WRITER_FIRST : WRITER_NOT_FIRST;
> >> 
> >>  	list_add_tail(&waiter.list, &sem->wait_list);
> >> 
> >>  	/* we're now waiting on the lock */
> >> +	if (wstate == WRITER_NOT_FIRST) {
> >>  		count = atomic_long_read(&sem->count);
> >> 
> >>  		/*
> >> +		 * If there were already threads queued before us and:
> >> +		 * 1) there are no active locks, wake the front
> >> +		 *    queued process(es) as the handoff bit might be set.
> >> +		 * 2) there are no active writers and some readers, the lock
> >> +		 *    must be read owned; so we try to wake any read lock
> >> +		 *    waiters that were queued ahead of us.
> >>  		 */
> >> +		if (!RWSEM_COUNT_LOCKED(count))
> >> +			__rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
> >> +		else if (!(count & RWSEM_WRITER_MASK) &&
> >> +			 (count & RWSEM_READER_MASK))
> >>  			__rwsem_mark_wake(sem, RWSEM_WAKE_READERS, &wake_q);
> > Does the above want to be something like:
> >
> > 	if (!(count & RWSEM_WRITER_LOCKED)) {
> > 		__rwsem_mark_wake(sem, (count & RWSEM_READER_MASK) ?
> > 				       RWSEM_WAKE_READERS :
> > 				       RWSEM_WAKE_ANY, &wake_q);
> > 	}
>
> Yes.
>
> >> +		else
> >> +			goto wait;
> >> 
> >> +		/*
> >> +		 * The wakeup is normally called _after_ the wait_lock
> >> +		 * is released, but given that we are proactively waking
> >> +		 * readers we can deal with the wake_q overhead as it is
> >> +		 * similar to releasing and taking the wait_lock again
> >> +		 * for attempting rwsem_try_write_lock().
> >> +		 */
> >> +		wake_up_q(&wake_q);
> > Hurmph.. the reason we do wake_up_q() outside of wait_lock is so that
> > those tasks don't bounce on wait_lock. Also, it removes a great deal of
> > hold-time from wait_lock.
> >
> > So I'm not sure I buy your argument here.
> >
>
> Actually, we don't want to release the wait_lock, do wake_up_q() and
> then acquire the wait_lock again, as the state may have changed in the
> meantime. I didn't change the comment in this patch, but I will reword
> it to discuss that.
I don't understand: we've queued ourselves, we're on the list, and we're
not first. How would dropping the lock to try and kick the waiters ahead
of us be a problem?
Sure, once we re-acquire the lock we have to re-evaluate @wstate to see
if we're first now or not, but we need to do that anyway.
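For reference, the usual shape of the argument above is: collect the
wakeups under wait_lock, drop the lock, then wake. A minimal sketch of
that pattern (not code from the patch, just an illustration):

	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock_irq(&sem->wait_lock);
	/* Collect the tasks to wake while holding wait_lock... */
	__rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
	raw_spin_unlock_irq(&sem->wait_lock);

	/*
	 * ... but issue the actual wakeups only after dropping wait_lock,
	 * so the woken tasks don't immediately bounce on it and the wakeup
	 * cost doesn't add to the lock hold time.
	 */
	wake_up_q(&wake_q);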
So what is wrong with the below?
--- a/include/linux/sched/wake_q.h
+++ b/include/linux/sched/wake_q.h
@@ -51,6 +51,11 @@ static inline void wake_q_init(struct wake_q_head *head)
 	head->lastp = &head->first;
 }
 
+static inline bool wake_q_empty(struct wake_q_head *head)
+{
+	return head->first == WAKE_Q_TAIL;
+}
+
 extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
 extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
 extern void wake_up_q(struct wake_q_head *head);
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -700,25 +700,22 @@ __rwsem_down_write_failed_common(struct rw_semaphore *sem, int state)
 		 * must be read owned; so we try to wake any read lock
 		 * waiters that were queued ahead of us.
 		 */
-		if (!(count & RWSEM_LOCKED_MASK))
-			__rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
-		else if (!(count & RWSEM_WRITER_MASK) &&
-			 (count & RWSEM_READER_MASK))
-			__rwsem_mark_wake(sem, RWSEM_WAKE_READERS, &wake_q);
-		else
+		if (count & RWSEM_WRITER_LOCKED)
 			goto wait;
-		/*
-		 * The wakeup is normally called _after_ the wait_lock
-		 * is released, but given that we are proactively waking
-		 * readers we can deal with the wake_q overhead as it is
-		 * similar to releasing and taking the wait_lock again
-		 * for attempting rwsem_try_write_lock().
-		 */
-		wake_up_q(&wake_q);
-		/*
-		 * Reinitialize wake_q after use.
-		 */
-		wake_q_init(&wake_q);
+
+		__rwsem_mark_wake(sem, (count & RWSEM_READER_MASK) ?
+				       RWSEM_WAKE_READERS :
+				       RWSEM_WAKE_ANY, &wake_q);
+
+		if (!wake_q_empty(&wake_q)) {
+			raw_spin_unlock_irq(&sem->wait_lock);
+			wake_up_q(&wake_q);
+			/* used again, reinit */
+			wake_q_init(&wake_q);
+			raw_spin_lock_irq(&sem->wait_lock);
+			if (rwsem_waiter_is_first(sem, &waiter))
+				wstate = WRITER_FIRST;
+		}
 	} else {
 		count = atomic_long_add_return(RWSEM_FLAG_WAITERS, &sem->count);
 	}
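(Side note: rwsem_waiter_is_first() in the diff above is a helper being
assumed rather than quoted from the tree; a minimal sketch, to be called
with wait_lock held, might look like:)

	/*
	 * Hypothetical helper: true iff @waiter is now at the head of the
	 * wait list. Caller must hold sem->wait_lock.
	 */
	static inline bool rwsem_waiter_is_first(struct rw_semaphore *sem,
						 struct rwsem_waiter *waiter)
	{
		return list_first_entry(&sem->wait_list,
					struct rwsem_waiter, list) == waiter;
	}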