Message-ID: <20140617200531.GB27242@laptop.dumpdata.com>
Date: Tue, 17 Jun 2014 16:05:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>, Waiman.Long@...com
Cc: Waiman.Long@...com, tglx@...utronix.de, mingo@...nel.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
paolo.bonzini@...il.com, boris.ostrovsky@...cle.com,
paulmck@...ux.vnet.ibm.com, riel@...hat.com,
torvalds@...ux-foundation.org, raghavendra.kt@...ux.vnet.ibm.com,
david.vrabel@...rix.com, oleg@...hat.com, gleb@...hat.com,
scott.norton@...com, chegu_vinod@...com,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
> + * The basic principle of a queue-based spinlock can best be understood
> + * by studying a classic queue-based spinlock implementation called the
> + * MCS lock. The paper below provides a good description of this kind
> + * of lock.
> + *
> + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
> + *
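For readers without the paper handy, here is a minimal userspace C11
sketch of the classic MCS algorithm; the names are mine and not the
patch's, and error handling is omitted:

#include <stdatomic.h>
#include <stddef.h>

struct mcs_node {
	struct mcs_node *_Atomic next;	/* successor in the queue */
	atomic_int locked;		/* 0: keep spinning, 1: lock is ours */
};

struct mcs_lock {
	struct mcs_node *_Atomic tail;	/* last waiter, NULL if uncontended */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, 0);

	/* Atomically make ourselves the new tail of the queue. */
	prev = atomic_exchange(&lock->tail, node);
	if (prev) {
		/* Queue was non-empty: link in behind prev and spin
		 * on our OWN node until prev passes us the lock. */
		atomic_store(&prev->next, node);
		while (!atomic_load(&node->locked))
			;	/* spin */
	}
	/* Queue was empty: the lock is ours immediately. */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);

	if (!next) {
		/* No visible successor: if the tail is still us,
		 * the queue is empty and we can just reset it. */
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong(&lock->tail,
						   &expected, NULL))
			return;
		/* A successor is mid-enqueue; wait for its link. */
		while (!(next = atomic_load(&node->next)))
			;	/* spin */
	}
	/* Explicitly hand the lock to the successor. */
	atomic_store(&next->locked, 1);
}

Note that mcs_lock_release() may itself spin, and that it performs the
lock handoff; both points become relevant below.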
> + * This queue spinlock implementation is based on the MCS lock; however,
> + * to make it fit the 4 bytes that we assume spinlock_t to be, and to
> + * preserve its existing API, we must modify it somewhat.
> + *
> + * In particular, where the traditional MCS lock consists of a tail pointer
> + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node
> + * to unlock the next pending waiter (next->locked), we compress both of
> + * these, {tail, next->locked}, into a single u32 value.
> + *
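One way to picture the compressed word (the exact bit positions here
are my assumption, not necessarily the patch's):

/*
 *  31           10 9     8 7            0
 * +---------------+-------+--------------+
 * | tail cpu + 1  |  idx  | locked byte  |
 * +---------------+-------+--------------+
 */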
> + * Since a spinlock disables recursion within its own context, and there
> + * is a limit to the contexts that can nest (namely: task, softirq,
> + * hardirq, nmi), we can encode the tail as an index indicating this
> + * context plus a cpu number.
> + *
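A sketch of such an encoding, matching the layout pictured above;
encode_tail() is an assumed helper name:

typedef unsigned int u32;

#define _Q_TAIL_IDX_OFFSET	8	/* idx: which of the 4 per-cpu nodes */
#define _Q_TAIL_CPU_OFFSET	10	/* cpu number, stored as cpu + 1 */

static inline u32 encode_tail(int cpu, int idx)
{
	/* cpu + 1, so that tail == 0 can mean "no queue at all". */
	return ((u32)(cpu + 1) << _Q_TAIL_CPU_OFFSET) |
	       ((u32)idx << _Q_TAIL_IDX_OFFSET);
}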
> + * We can further change the first spinner to spin on a bit in the lock
> + * word instead of on its node, thereby avoiding the need to carry a node
> + * from lock to unlock, and preserving the API.
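Continuing the userspace sketch, the head-of-queue waiter would then
look something like this (_Q_LOCKED_MASK is an assumed name for the
locked-byte bits):

#define _Q_LOCKED_MASK	0xff	/* bits 0-7 of the layout above */

static void wait_for_owner(_Atomic u32 *val)
{
	/* Spin on the lock word itself rather than on our MCS node,
	 * so no node has to survive from lock() to unlock(). */
	while (atomic_load(val) & _Q_LOCKED_MASK)
		;	/* spin; in-kernel this would use cpu_relax() */
}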
You also made changes (compared to the MCS lock) in that the unlock path
no longer spins waiting for the successor, and the job of passing the
lock on is not done in the unlock path either.
Instead, all of that is now done in the lock acquirer's path.
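That difference could be sketched like this (the function name and the
exact atomic operation are my assumptions, continuing the illustrations
above):

#define _Q_LOCKED_VAL	1

static void queue_spin_unlock_sketch(_Atomic u32 *val)
{
	/* Just release the locked byte.  Unlike mcs_lock_release()
	 * above, there is no spinning for a successor and no explicit
	 * handoff here. */
	atomic_fetch_sub(val, _Q_LOCKED_VAL);
}

The next-in-queue acquirer observes the locked byte go clear, takes the
lock, and then itself releases its own successor (node->next->locked),
i.e. the handoff has moved from the unlock path into the lock path.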
Could you update the comment to say that, please?
Thanks.