Message-Id: <20070323114047.b60f2acc.dada1@cosmosbay.com>
Date: Fri, 23 Mar 2007 11:40:47 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Nick Piggin <npiggin@...e.de>
Cc: Ingo Molnar <mingo@...e.hu>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ravikiran G Thirumalai <kiran@...lex86.org>
Subject: Re: [rfc][patch] queued spinlocks (i386)
On Fri, 23 Mar 2007 11:32:44 +0100
Nick Piggin <npiggin@...e.de> wrote:
> On Fri, Mar 23, 2007 at 11:04:18AM +0100, Ingo Molnar wrote:
> >
> > * Nick Piggin <npiggin@...e.de> wrote:
> >
> > > Implement queued spinlocks for i386. [...]
> >
> > isn't this patented by MS? (which might not worry you SuSE/Novell guys,
> > but it might be a worry for the rest of the world ;-)
>
> Hmm, it looks like they have implemented a system where the spinning
> CPU sleeps on a per-CPU variable rather than the lock itself, and
> the releasing CPU writes to that variable to wake it. They do this
> so that spinners don't continually perform exclusive->shared
> transitions on the lock cacheline. They call these things queued
> spinlocks. They don't seem to be very patent-worthy either, but
> maybe it is what you're thinking of?
>
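The scheme described above is essentially an MCS queue lock: each
waiter spins on a node of its own, and the releasing CPU hands the
lock over by writing that node's flag. A minimal userspace sketch of
the idea, with made-up names and C11 atomics standing in for kernel
primitives (not the patch's code, and certainly not MS's):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct qnode {
        _Atomic(struct qnode *) next;
        atomic_bool locked;             /* spun on by its owner only */
};

struct qlock {
        _Atomic(struct qnode *) tail;   /* last waiter, or NULL */
};

static void qlock_acquire(struct qlock *lock, struct qnode *node)
{
        struct qnode *prev;

        atomic_store(&node->next, NULL);
        atomic_store(&node->locked, true);

        /* One atomic exchange on the shared lock word joins the queue. */
        prev = atomic_exchange(&lock->tail, node);
        if (!prev)
                return;                 /* queue was empty: lock is ours */

        atomic_store(&prev->next, node);
        /* Spin on our own flag, not on the shared lock word. */
        while (atomic_load(&node->locked))
                ;                       /* cpu_relax() in the kernel */
}

static void qlock_release(struct qlock *lock, struct qnode *node)
{
        struct qnode *next = atomic_load(&node->next);

        if (!next) {
                struct qnode *expected = node;

                /* No successor visible: if we are still the tail,
                 * empty the queue and we are done. */
                if (atomic_compare_exchange_strong(&lock->tail,
                                                   &expected, NULL))
                        return;

                /* A successor is mid-enqueue: wait for its link. */
                while (!(next = atomic_load(&node->next)))
                        ;
        }
        /* Wake the next CPU by writing its private flag. */
        atomic_store(&next->locked, false);
}

Each CPU passes in a qnode of its own (per-CPU storage in a kernel),
which is what lets a waiter spin purely locally: the shared lock word
is touched only by the xchg on entry and the cmpxchg on exit, so
contending CPUs never ping-pong its cacheline while they wait.
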
> I'm not as concerned about the contended performance of spinlocks
> for Linux as MS seems to be for Windows (they seem to be very proud
> of this lock), because if it is a big problem then IMO it is a bug.
>
> This was just something I had in mind when the hardware lock
> starvation issue came up, so I thought I should quickly code it up
> and RFC it... actually it makes contended performance worse, but I'm
> not too worried about that because I'm happy I was able to implement
> it without increasing the data size or the number of locked operations.
Sure, but please note that you should rename your patch to:
"Implement queued spinlocks for i486"
:)
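
(The i486 reference: the natural single-instruction form of the ticket
scheme's fetch-and-add is xadd, which first appeared on the i486; the
80386 has no such instruction.) A minimal C11 sketch of a ticket lock
along these lines, with illustrative names only, not the patch's code:
one locked fetch-and-add takes a ticket on acquire, and release is a
plain store, so neither the data size nor the number of locked
operations grows.

#include <stdatomic.h>

struct ticket_lock {
        atomic_ushort next;     /* next ticket to hand out */
        atomic_ushort owner;    /* ticket currently being served */
};

static void ticket_lock_acquire(struct ticket_lock *lock)
{
        /* The one locked RMW: on i486+ this is a single "lock xadd". */
        unsigned short me = atomic_fetch_add(&lock->next, 1);

        /* Wait until our ticket is called; waiters are served FIFO,
         * which also avoids the hardware lock starvation issue. */
        while (atomic_load(&lock->owner) != me)
                ;               /* cpu_relax() in the kernel */
}

static void ticket_lock_release(struct ticket_lock *lock)
{
        /* Only the lock holder writes ->owner, so no locked RMW is
         * needed here: a plain release-store hands the lock on. */
        unsigned short served = atomic_load_explicit(&lock->owner,
                                                     memory_order_relaxed);

        atomic_store_explicit(&lock->owner,
                              (unsigned short)(served + 1),
                              memory_order_release);
}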