Message-ID: <1406565669.25428.2.camel@buesod1.americas.hpqcorp.net>
Date: Mon, 28 Jul 2014 09:41:09 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: Jason Low <jason.low2@...com>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
aswin@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -tip/master 4/7] locking/mutex: Refactor optimistic
spinning code
On Mon, 2014-07-28 at 09:39 -0700, Jason Low wrote:
> On Mon, 2014-07-28 at 11:08 +0200, Peter Zijlstra wrote:
> > On Sun, Jul 27, 2014 at 10:18:41PM -0700, Davidlohr Bueso wrote:
> > > +static bool mutex_optimistic_spin(struct mutex *lock,
> > > +				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
> > > +{
> >
> >
> > > +	/*
> > > +	 * If we fell out of the spin path because of need_resched(),
> > > +	 * reschedule now, before we try-lock the mutex. This avoids getting
> > > +	 * scheduled out right after we obtained the mutex.
> > > +	 */
> > > +	if (need_resched())
> > > +		schedule_preempt_disabled();
> > > +
> > > +	return false;
> > > +}
> >
> >
> > > +	if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
> > > +		/* got it, yay! */
> > > +		preempt_enable();
> > > +		return 0;
> > >  	}
> > > +
> > >  	/*
> > >  	 * If we fell out of the spin path because of need_resched(),
> > >  	 * reschedule now, before we try-lock the mutex. This avoids getting
> > > @@ -475,7 +512,7 @@ slowpath:
> > >  	 */
> > >  	if (need_resched())
> > >  		schedule_preempt_disabled();
> > > +
> > >  	spin_lock_mutex(&lock->wait_lock, flags);
> >
> > We now have two "if (need_resched()) schedule_preempt_disabled()"
> > instances; was that on purpose?
>
> I think we can delete the extra check in mutex_optimistic_spin(). It is
> sufficient to have the one here in the slowpath, and it also covers the
> case where the task sees need_resched() without ever attempting to spin.
Yes, I need to delete the second check; one is enough.
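
For context, here is a minimal sketch of the combined result, assuming
the slowpath check is the one that stays, as suggested above. The
__mutex_lock_common() surroundings are abbreviated for illustration and
are not the literal final patch:

	static bool mutex_optimistic_spin(struct mutex *lock,
					  struct ww_acquire_ctx *ww_ctx,
					  const bool use_ww_ctx)
	{
		/* ... optimistic spinning elided ... */

		/* No need_resched() handling here any more. */
		return false;
	}

	/* In the __mutex_lock_common() slowpath: */
	if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
		/* got it, yay! */
		preempt_enable();
		return 0;
	}

	/*
	 * Single remaining check: if we fell out of the spin path
	 * because of need_resched(), reschedule now, before we try-lock
	 * the mutex. This also covers the case where the task saw
	 * need_resched() without ever attempting to spin.
	 */
	if (need_resched())
		schedule_preempt_disabled();

	spin_lock_mutex(&lock->wait_lock, flags);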