Message-ID: <17925.29368.942606.691598@gargle.gargle.HOWL>
Date: Sat, 24 Mar 2007 21:49:28 +0300
From: Nikita Danilov <nikita@...sterfs.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Nick Piggin <npiggin@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ravikiran G Thirumalai <kiran@...lex86.org>
Subject: Re: [rfc][patch] queued spinlocks (i386)
Ingo Molnar writes:
>
> * Nikita Danilov <nikita@...sterfs.com> wrote:
>
> > Indeed, this technique is very well known. E.g.,
> > http://citeseer.ist.psu.edu/anderson01sharedmemory.html has a whole
> > section (3. Local-spin Algorithms) on them, citing papers from
> > 1990 onward.
>
> that is a cool reference! So i'd suggest to do (redo?) the patch based
> on those concepts and that terminology and not use 'queued spinlocks'

There is an old version here:
http://namesys.com/pub/misc-patches/unsupported/extra/2004.02.04/p06-locallock.patch
http://namesys.com/pub/misc-patches/unsupported/extra/2004.02.04/p07-locallock-bkl.patch
http://namesys.com/pub/misc-patches/unsupported/extra/2004.02.04/p08-locallock-zone.patch
http://namesys.com/pub/misc-patches/unsupported/extra/2004.02.04/p0b-atomic_dec_and_locallock.patch
http://namesys.com/pub/misc-patches/unsupported/extra/2004.02.04/p0c-locallock-dcache.patch

This version retains the original spin-lock interface (i.e., no
additional "queue link" pointer is passed to the locking function). As
a result, the lock data structure contains an array of NR_CPUS
counters, so it is only suitable for global, statically allocated
locks.
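
For illustration only, an array-based local-spin lock along these lines
might look roughly like the user-space sketch below. This is not the
code from the patches above; MAX_CPUS, cpu_id() and the locallock_*
names are invented for the example, and it assumes at most one
acquisition per CPU at a time (preemption disabled around the critical
section):

/*
 * Rough sketch of an array-based local-spin lock (in the spirit of
 * Anderson's array lock), with all per-waiter state embedded in the
 * lock itself so the spin_lock()/spin_unlock() interface is unchanged.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdatomic.h>

#define MAX_CPUS  64
#define CACHELINE 64

struct llslot {
    /* One slot per waiter, padded so each CPU spins in its own line. */
    _Alignas(CACHELINE) atomic_uint may_enter;
};

typedef struct {
    struct llslot slot[MAX_CPUS];
    atomic_uint   next_ticket;          /* index of the next slot to hand out */
    unsigned      owner_slot[MAX_CPUS]; /* slot currently held by each CPU */
} locallock_t;

/* Slot 0 starts out "open" so the very first locker enters immediately. */
#define LOCALLOCK_INIT { .slot[0].may_enter = 1 }

/* Stand-in for smp_processor_id() in this user-space sketch. */
static unsigned cpu_id(void)
{
    return (unsigned)sched_getcpu() % MAX_CPUS;
}

static void locallock_lock(locallock_t *lk)
{
    /* The ticket doubles as the index of the slot to spin on. */
    unsigned t = atomic_fetch_add(&lk->next_ticket, 1) % MAX_CPUS;

    /* Local spin: no other CPU's cache line is touched while waiting. */
    while (!atomic_load_explicit(&lk->slot[t].may_enter,
                                 memory_order_acquire))
        ;                               /* cpu_relax() in kernel code */

    atomic_store_explicit(&lk->slot[t].may_enter, 0, memory_order_relaxed);
    lk->owner_slot[cpu_id()] = t;       /* no "queue link" passed back to the caller */
}

static void locallock_unlock(locallock_t *lk)
{
    unsigned t = lk->owner_slot[cpu_id()];

    /* Open the next slot; at most one CPU is spinning on it. */
    atomic_store_explicit(&lk->slot[(t + 1) % MAX_CPUS].may_enter, 1,
                          memory_order_release);
}

Each waiter polls only its own cache-line-aligned slot, so a contended
lock generates no cross-CPU cache traffic while waiting; the price is
that the lock grows with the number of CPUs, hence the restriction to
global, statically allocated locks mentioned above.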
> that are commonly associated with MS's stuff. And as a result the
> contended case would be optimized some more via local-spin algorithms.
> (which is not a key thing for us, but which would be nice to have
> nevertheless)
>
> Ingo
Nikita.