[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20181129221714.GF11632@hirez.programming.kicks-ass.net>
Date: Thu, 29 Nov 2018 23:17:14 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Yongji Xie <elohimes@...il.com>, mingo@...hat.com,
will.deacon@....com, linux-kernel@...r.kernel.org,
xieyongji@...du.com, zhangyu31@...du.com, liuqi16@...du.com,
yuanlinsi01@...du.com, nixun@...du.com, lilin24@...du.com,
longman@...hat.com, andrea.parri@...rulasolutions.com
Subject: Re: [RFC] locking/rwsem: Avoid issuing wakeup before setting the
reader waiter to nil
On Thu, Nov 29, 2018 at 01:34:21PM -0800, Davidlohr Bueso wrote:
> I messed up something such that waiman was not in the thread. Ccing.
>
> > On Thu, 29 Nov 2018, Waiman Long wrote:
> >
> > > That can be costly for x86 which will now have 2 locked instructions.
> >
> > Yeah, and when used as an actual queue we should really start to notice.
> > Some users just have a single task in the wake_q because avoiding the cost
> > of wake_up_process() with locks held is significant.
> >
> > How about instead of adding the barrier before the cmpxchg, we do it
> > in the failed branch, right before we return. This is the uncommon
> > path.
> >
> > Thanks,
> > Davidlohr
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 091e089063be..0d844a18a9dc 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -408,8 +408,14 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
> > * This cmpxchg() executes a full barrier, which pairs with the full
> > * barrier executed by the wakeup in wake_up_q().
> > */
> > - if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
> > + if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL)) {
> > + /*
> > + * Ensure, that when the cmpxchg() fails, the corresponding
> > + * wake_up_q() will observe our prior state.
> > + */
> > + smp_mb__after_atomic();
> > return;
> > + }
So wake_up_q() does:
wake_up_q():
node->next = NULL;
/* implied smp_mb */
wake_up_process();
So per the cross your variables 'rule', this side then should do:
wake_q_add():
/* wake_cond = true */
smp_mb()
cmpxchg_relaxed(&node->next, ...);
So that the ordering pivots around node->next.
Either we see NULL and win the cmpxchg (in which case we'll do the
wakeup later) or, when we fail the cmpxchg, we must observe what came
before the failure.
If it wasn't so damn late, I'd try and write a litmus test for this,
because now I'm starting to get confused -- also probably because it's
late.
In any case, I think you patch is 'wrong' because it puts the barrier on
the wrong side of the cmpxchg() (after, as opposed to before).
Powered by blists - more mailing lists