Message-ID: <20150409182314.GU24151@twins.programming.kicks-ass.net>
Date: Thu, 9 Apr 2015 20:23:14 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <Waiman.Long@...com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, linux-arch@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
Paolo Bonzini <paolo.bonzini@...il.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Rik van Riel <riel@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
David Vrabel <david.vrabel@...rix.com>,
Oleg Nesterov <oleg@...hat.com>,
Daniel J Blueman <daniel@...ascale.com>,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
On Thu, Apr 09, 2015 at 08:13:27PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 06, 2015 at 10:55:44PM -0400, Waiman Long wrote:
> > +#define PV_HB_PER_LINE (SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
> > +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
> > +{
> > + unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
> > + struct pv_hash_bucket *hb, *end;
> > +
> > + if (!hash)
> > + hash = 1;
> > +
> > + init_hash = hash;
> > + hb = &pv_lock_hash[hash_align(hash)];
> > + for (;;) {
> > + for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
> > + if (!cmpxchg(&hb->lock, NULL, lock)) {
> > + WRITE_ONCE(hb->node, node);
> > + /*
> > + * We haven't set the _Q_SLOW_VAL yet. So
> > + * the order of writing doesn't matter.
> > + */
> > + smp_wmb(); /* matches rmb from pv_hash_find */
> > + goto done;
> > + }
> > + }
> > +
> > + hash = lfsr(hash, pv_lock_hash_bits, 0);
>
> Since pv_lock_hash_bits is a variable, you end up running through that
> massive if() forest to find the corresponding tap on every single call;
> the compiler cannot optimize the tap selection away at compile time.
>
> Hence:
> hash = lfsr(hash, pv_taps);
>
> (I don't get the bits argument to the lfsr).
>
> In any case, like I said before, I think we should try a linear probe
> sequence first, the lfsr was over engineering from my side.
>
> > + hb = &pv_lock_hash[hash_align(hash)];
So one thing this does -- and one of the reasons I figured I should
ditch the LFSR instead of fixing it -- is that, because hash_align()
maps PV_HB_PER_LINE consecutive hash values to the same cacheline, you
end up scanning each bucket PV_HB_PER_LINE times.

The 'fix' would be to run the LFSR over cachelines instead of over
individual buckets, but since an LFSR sequence never produces 0 you're
then stuck with an unreachable 0-th cacheline.
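For reference, a Galois LFSR step is only cheap when the taps are a
compile-time constant, which is what passing a fixed pv_taps instead of
pv_lock_hash_bits would buy. A stand-alone sketch; the function name and
the 4-bit example below are illustrative, not from the patch:

```c
#include <assert.h>

/*
 * Sketch of one Galois LFSR step. With a constant 'taps' this folds
 * to a shift, a test and an xor; deriving the tap constant from a
 * variable bit count on every call is the "if() forest" problem.
 * Illustrative only, not the patch's lfsr().
 */
static inline unsigned long lfsr_step(unsigned long v, unsigned long taps)
{
	unsigned long lsb = v & 1;

	v >>= 1;
	if (lsb)
		v ^= taps;
	return v;
}
```

With taps 0x9 (x^4 + x + 1) a 4-bit register cycles through all 15
non-zero states; the state 0 is never produced, which is exactly why a
cacheline-granular LFSR can never visit the 0-th cacheline.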
> > + BUG_ON(hash == init_hash);
> > + }
> > +
> > +done:
> > + return &hb->lock;
> > +}
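The linear probe sequence suggested above can be modelled in a few
lines. Everything below (table size, bucket layout, the plain pointer
store standing in for cmpxchg(), the function name) is a simplified
stand-alone assumption, not the patch:

```c
#include <stddef.h>

/*
 * Simplified model of a linear probe over the pv hash: start at the
 * hashed cacheline and walk forward bucket by bucket, wrapping at the
 * end of the table.  Each bucket is visited exactly once, unlike the
 * per-bucket LFSR which can rescan a cacheline PV_HB_PER_LINE times.
 * The plain store stands in for cmpxchg(&hb->lock, NULL, lock).
 */
#define PV_HB_PER_LINE	4
#define PV_HASH_SIZE	(4 * PV_HB_PER_LINE)

struct pv_hash_bucket {
	void *lock;
	void *node;
};

static struct pv_hash_bucket pv_lock_hash[PV_HASH_SIZE];

static int pv_hash_insert(void *lock, void *node, unsigned long hash)
{
	unsigned long i;
	unsigned long base = (hash % (PV_HASH_SIZE / PV_HB_PER_LINE)) *
			     PV_HB_PER_LINE;

	for (i = 0; i < PV_HASH_SIZE; i++) {
		struct pv_hash_bucket *hb =
			&pv_lock_hash[(base + i) % PV_HASH_SIZE];

		if (!hb->lock) {
			hb->lock = lock;
			hb->node = node;
			return (int)((base + i) % PV_HASH_SIZE);
		}
	}
	return -1;	/* table full */
}
```

Because the probe is a plain forward walk, every bucket is reachable
(including the 0-th cacheline) and the table can fill completely before
insertion fails.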