Message-ID: <1438628982.17146.20.camel@stgolabs.net>
Date: Mon, 03 Aug 2015 12:09:42 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <Waiman.Long@...com>, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v4 1/7] locking/pvqspinlock: Unconditional PV kick with
_Q_SLOW_VAL
On Mon, 2015-08-03 at 20:37 +0200, Peter Zijlstra wrote:
> OK, so there's no 'fix'? The patch claims we can lose a wakeup and I
> just don't see how that is true.
Taking another look, I think you could hit something like this:
CPU0 (lock):                                 CPU1 (unlock):
pv_wait_head                                 __pv_queued_spin_unlock
                                               <load ->state>  [bogus ->state != halted]
  <spin>                                       smp_store_release(&l->locked, 0);
  WRITE_ONCE(pn->state, vcpu_halted);
  pv_wait(&l->locked, _Q_SLOW_VAL);            if (->state == vcpu_halted)
                                                   pv_kick(node->cpu); <-- missing wakeup, never called
So basically you can miss a wakeup if the unlocker's node->state load is
done while the locking thread is still spinning and hasn't yet had a
chance to update its state to vcpu_halted. That also implies the race
occurs right as the spin threshold is about to be reached; see the sketch
of the unlock path below.
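
To make that window concrete, here is roughly what the unlock slowpath
looks like (a simplified paraphrase of qspinlock_paravirt.h from my
reading, not a verbatim copy), with the problematic ->state check
annotated:

__visible void __pv_queued_spin_unlock(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;
	struct pv_node *node;

	/* fastpath: lock word is still plain _Q_LOCKED_VAL, nobody to kick */
	if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
		return;

	/* slowpath: the waiter hashed the lock, look up its node */
	node = pv_unhash(lock);

	/* release the lock for the waiter */
	smp_store_release(&l->locked, 0);

	/*
	 * Nothing orders this load against the waiter's
	 * WRITE_ONCE(pn->state, vcpu_halted). If we observe the
	 * pre-halted value, the kick below is skipped even though
	 * the waiter goes on to block in pv_wait().
	 */
	if (READ_ONCE(node->state) == vcpu_halted)
		pv_kick(node->cpu);
}

Making the pv_kick() unconditional, as this patch does, sidesteps the
stale ->state check entirely; the worst case is a spurious kick of a
vCPU that never halted.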