[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20151106150149.GV17308@twins.programming.kicks-ass.net>
Date: Fri, 6 Nov 2015 16:01:49 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <Waiman.Long@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH tip/locking/core v9 6/6] locking/pvqspinlock: Queue node
adaptive spinning
On Fri, Oct 30, 2015 at 07:26:37PM -0400, Waiman Long wrote:
> +++ b/kernel/locking/qspinlock_paravirt.h
> @@ -23,6 +23,19 @@
> #define _Q_SLOW_VAL (3U << _Q_LOCKED_OFFSET)
>
> /*
> + * Queue Node Adaptive Spinning
> + *
> + * A queue node vCPU will stop spinning if the vCPU in the previous node is
> + * not running. The one lock stealing attempt allowed at slowpath entry
> + * mitigates the slight slowdown for non-overcommitted guest with this
> + * aggressive wait-early mechanism.
> + *
> + * The status of the previous node will be checked at fixed interval
> + * controlled by PV_PREV_CHECK_MASK.
> + */
> +#define PV_PREV_CHECK_MASK 0xff
> +
> +/*
> * Queue node uses: vcpu_running & vcpu_halted.
> * Queue head uses: vcpu_running & vcpu_hashed.
> */
> @@ -202,6 +215,20 @@ static struct pv_node *pv_unhash(struct qspinlock *lock)
> }
>
> /*
> + * Return true if when it is time to check the previous node which is not
> + * in a running state.
> + */
> +static inline bool
> +pv_wait_early(struct pv_node *prev, int loop)
> +{
> +
> + if ((loop & PV_PREV_CHECK_MASK) != 0)
> + return false;
> +
> + return READ_ONCE(prev->state) != vcpu_running;
> +}
So it appears to me the sole purpose of PV_PREV_CHECK_MASK it to avoid
touching the prev->state cacheline too hard. Yet that is not mentioned
anywhere above.
> +static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
> {
> struct pv_node *pn = (struct pv_node *)node;
> + struct pv_node *pp = (struct pv_node *)prev;
> int waitcnt = 0;
> int loop;
> + bool wait_early;
>
> /* waitcnt processing will be compiled out if !QUEUED_LOCK_STAT */
> for (;; waitcnt++) {
> - for (loop = SPIN_THRESHOLD; loop; loop--) {
> + for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
> if (READ_ONCE(node->locked))
> return;
> + if (pv_wait_early(pp, loop)) {
> + wait_early = true;
> + break;
> + }
> cpu_relax();
> }
>
So if prev points to another node, it will never see vcpu_running. Was
that fully intended?
FYI, I think I've now seen all patches ;-)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists