Message-ID: <20150914135749.GS18489@twins.programming.kicks-ass.net>
Date: Mon, 14 Sep 2015 15:57:49 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <Waiman.Long@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v6 5/6] locking/pvqspinlock: Allow 1 lock stealing attempt
On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
> +#define queued_spin_trylock(l) pv_queued_spin_trylock_unfair(l)
> +static inline bool pv_queued_spin_trylock_unfair(struct qspinlock *lock)
> +{
> +        struct __qspinlock *l = (void *)lock;
> +
> +        if (READ_ONCE(l->locked))
> +                return 0;
> +        /*
> +         * Wait a bit here to ensure that an actively spinning vCPU has a fair
> +         * chance of getting the lock.
> +         */
> +        cpu_relax();
> +
> +        return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
> +}
> +
> +static inline int pvstat_trylock_unfair(struct qspinlock *lock)
> +{
> +        int ret = pv_queued_spin_trylock_unfair(lock);
> +
> +        if (ret)
> +                pvstat_inc(pvstat_utrylock);
> +        return ret;
> +}
> +
> +#undef queued_spin_trylock
> +#define queued_spin_trylock(l) pvstat_trylock_unfair(l)
These aren't actually ever used...
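For readers outside the qspinlock code: the stealing attempt quoted above
reduces to a peek at the lock byte, one cpu_relax() pause so an actively
spinning vCPU keeps a fair chance, then a single cmpxchg. Below is a minimal
userspace sketch of that shape, assuming GCC atomic builtins on x86 in place
of the kernel's READ_ONCE()/cmpxchg()/cpu_relax(); the lock word and all
names here are illustrative, not the kernel's.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the qspinlock "locked" byte. */
static uint8_t locked;

/*
 * Unfair trylock, modelled on the quoted patch: peek at the lock byte,
 * pause once so a spinning waiter keeps a fair chance, then make one
 * compare-and-swap stealing attempt.
 */
static bool trylock_unfair(void)
{
        if (__atomic_load_n(&locked, __ATOMIC_RELAXED))
                return false;

        __builtin_ia32_pause();  /* x86-only stand-in for cpu_relax() */

        uint8_t expected = 0;
        return __atomic_compare_exchange_n(&locked, &expected, 1, false,
                                           __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

int main(void)
{
        printf("first try:  %d\n", trylock_unfair());  /* 1: lock stolen  */
        printf("second try: %d\n", trylock_unfair());  /* 0: already held */
        return 0;
}

The lone pause between the read and the compare-and-swap is the whole
fairness concession in this scheme: a waiter already spinning on the byte
gets one window to take the lock before the stealer's cmpxchg lands.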