Message-ID: <51FA3455.1000607@linux.vnet.ibm.com>
Date: Thu, 01 Aug 2013 15:41:33 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Waiman Long <Waiman.Long@...com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Richard Weinberger <richard@....at>,
Catalin Marinas <catalin.marinas@....com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Matt Fleming <matt.fleming@...el.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Akinobu Mita <akinobu.mita@...il.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Michel Lespinasse <walken@...gle.com>,
Andi Kleen <andi@...stfloor.org>,
Rik van Riel <riel@...hat.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
George Spelvin <linux@...izon.com>,
Harvey Harrison <harvey.harrison@...il.com>,
"Chandramouleeswaran, Aswin" <aswin@...com>,
"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH RFC 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation

On 08/01/2013 03:10 PM, Peter Zijlstra wrote:
> On Wed, Jul 31, 2013 at 10:37:10PM -0400, Waiman Long wrote:
>
> OK, so over-all I rather like the thing. It might be good to include a
> link to some MCS lock description; sadly Wikipedia doesn't have an
> article on the concept :/
>
> http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
>
> That seems like a nice (short-ish) write-up of the general algorithm.
>
>> +typedef struct qspinlock {
>> + union {
>> + struct {
>> + u8 locked; /* Bit lock */
>> + u8 reserved;
>> + u16 qcode; /* Wait queue code */
>> + };
>> + u32 qlock;
>> + };
>> +} arch_spinlock_t;
>
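Just to check I'm reading the word layout right: on a little-endian
machine the lock byte is the low byte of qlock, so the byte store in the
unlock path below touches only that byte, while the whole 32-bit word
presumably stays available for atomic ops. A quick user-space sketch
(typedefs added only so it builds standalone with gcc):

#include <stdint.h>
#include <stdio.h>

typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;

struct qspinlock {
        union {
                struct {
                        u8  locked;     /* Bit lock */
                        u8  reserved;
                        u16 qcode;      /* Wait queue code */
                };
                u32 qlock;
        };
};

int main(void)
{
        struct qspinlock l;

        l.qlock  = 0;
        l.locked = 1;   /* byte lock held */
        l.qcode  = 42;  /* some queued waiter */

        /* little-endian: prints qlock = 0x002a0001 */
        printf("qlock = 0x%08x\n", (unsigned int)l.qlock);
        return 0;
}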
>> +static __always_inline void queue_spin_unlock(struct qspinlock *lock)
>> +{
>> + barrier();
>> + ACCESS_ONCE(lock->locked) = 0;
>
> It's always good to add comments with barriers..
>
>> + smp_wmb();
>> +}
>
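FWIW, a purely illustrative sketch of what documenting those barriers
could look like (same code as quoted above, only the comments are new):

static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
        /*
         * Compiler barrier: keep the critical section's accesses from
         * being reordered by the compiler past the store below that
         * releases the byte lock.
         */
        barrier();
        ACCESS_ONCE(lock->locked) = 0;
        /*
         * Order the unlocking store above before any subsequent
         * stores from this CPU.
         */
        smp_wmb();
}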
>> +/*
>> + * The queue node structure
>> + */
>> +struct qnode {
>> + struct qnode *next;
>> + u8 wait; /* Waiting flag */
>> + u8 used; /* Used flag */
>> +#ifdef CONFIG_DEBUG_SPINLOCK
>> + u16 cpu_nr; /* CPU number */
>> + void *lock; /* Lock address */
>> +#endif
>> +};
>> +
>> +/*
>> + * The 16-bit wait queue code is divided into the following 2 fields:
>> + * Bits 0-1 : queue node index
>> + * Bits 2-15: cpu number + 1
>> + *
>> + * The current implementation will allow a maximum of (1<<14)-1 = 16383 CPUs.
>
> I haven't yet read far enough to figure out why you need the -1 thing,
> but effectively you're restricted to 15k due to this.
>
It is exactly 16k-1, not 15k.
That is because the cpu number is stored as cpu + 1, so the 14-bit field
holds codes 1..16k-1, which map to cpus 0..16k-2, i.e. 16383 CPUs.
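
To make that concrete, here is a quick user-space sketch of the encoding
described in the patch comment above (the helper names are mine, not
from the patch):

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the 16-bit wait queue code described in the patch comment:
 *   bits 0-1  : queue node index (0..3)
 *   bits 2-15 : cpu number + 1   (1..16383)
 * The helper names below are made up for illustration only.
 */
static inline uint16_t encode_qcode(unsigned int cpu, unsigned int idx)
{
        return (uint16_t)(((cpu + 1) << 2) | (idx & 3));
}

static inline unsigned int qcode_to_cpu(uint16_t qcode)
{
        return (qcode >> 2) - 1;
}

int main(void)
{
        /* Largest cpu number that still fits: (1 << 14) - 2 = 16382 */
        unsigned int cpu = (1 << 14) - 2;
        uint16_t qcode = encode_qcode(cpu, 3);

        /* prints: cpu 16382, idx 3 -> qcode 0xffff -> cpu 16382 */
        printf("cpu %u, idx 3 -> qcode 0x%04x -> cpu %u\n",
               cpu, qcode, qcode_to_cpu(qcode));
        return 0;
}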