Message-ID: <4680D162.9050603@yahoo.com.au>
Date: Tue, 26 Jun 2007 18:42:10 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Eric Dumazet <dada1@...mosbay.com>,
Chuck Ebbert <cebbert@...hat.com>, Ingo Molnar <mingo@...e.hu>,
Jarek Poplawski <jarkao2@...pl>,
Miklos Szeredi <miklos@...redi.hu>, chris@...ee.ca,
linux-kernel@...r.kernel.org, tglx@...utronix.de,
akpm@...ux-foundation.org
Subject: Re: [BUG] long freezes on thinkpad t60

Linus Torvalds wrote:
>
> On Thu, 21 Jun 2007, Eric Dumazet wrote:
>
>>This reminds me of Nick's proposal of 'queued spinlocks' 3 months ago.
>>
>>Maybe this should be re-considered? (unlock is still a non-atomic op,
>>so we don't pay the serializing cost twice)
>
>
> No. The point is simple:
>
> IF YOU NEED THIS, YOU ARE DOING SOMETHING WRONG!
>
> I don't understand why this is even controversial. Especially since we
> have a patch for the problem that proves my point: the _proper_ way to fix
> things is to just not do the bad thing, instead of trying to allow the bad
> behaviour and try to handle it.
>
> Things like queued spinlocks just make excuses for bad code.
>
> We don't do nesting locking either, for exactly the same reason. Are
> nesting locks "easier"? Absolutely. They are also almost always a sign of
> a *bug*. So making spinlocks and/or mutexes nest by default is just a way
> to encourage bad programming!

Hmm, not that I have a strong opinion one way or the other, but I
don't know that they would encourage bad code. They are not going to
reduce latency under a locked section, but will improve determinism
in the contended case.

They should also improve performance in the heavily contended case due to
the nature of how they spin, but I know that's not something you want
to hear about. And theoretically there should be no reason why xadd is
any slower than dec and a look at the status flags, should there? I never
implemented it in optimised assembly to test, though...
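
To make the xadd point concrete, the acquire path is basically one xadd
followed by a compare loop; roughly the sketch below. This is illustrative
only: made-up names, <stdatomic.h>-style atomic ops rather than real i386
assembly, and not the code from the earlier patch.

#include <stdatomic.h>

struct ticket_lock {
	atomic_ushort next;	/* ticket handed to the next arriving CPU */
	atomic_ushort owner;	/* ticket whose holder may enter now */
};				/* both counters start at zero */

static void ticket_lock_acquire(struct ticket_lock *lock)
{
	/* One fetch-and-add (xadd on x86) takes a ticket... */
	unsigned short ticket = atomic_fetch_add_explicit(&lock->next, 1,
						memory_order_relaxed);

	/* ...then spin until it is our turn; waiters get the lock
	 * strictly in arrival order (cpu_relax() would go in here). */
	while (atomic_load_explicit(&lock->owner,
				    memory_order_acquire) != ticket)
		;
}

static void ticket_lock_release(struct ticket_lock *lock)
{
	/* Unlock stays a plain store, not a locked RMW. */
	unsigned short next_owner =
		atomic_load_explicit(&lock->owner, memory_order_relaxed) + 1;
	atomic_store_explicit(&lock->owner, next_owner, memory_order_release);
}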

Some hardware seems to have no idea of fair cacheline scheduling, and
especially when there are more than 2 CPUs contending for the
cacheline, some CPUs can be starved for a long time.

And actually, sometimes we have code that really wants to drop the
lock and queue behind other contenders. Most of the lockbreak stuff,
for example.
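
For illustration, the pattern I mean is something like the following;
the list type and helpers are made up, and cond_resched_lock() is the
helper that drops the lock when a reschedule or lock break is pending
and then takes it again.

/* Hypothetical walker, not from any real file. */
static void walk_long_list(struct my_list *list)
{
	spin_lock(&list->lock);
	while (more_entries(list)) {
		process_one_entry(list);
		/*
		 * Give waiters a chance: this may drop the lock, let
		 * others run, and reacquire it.  With an unfair spinlock
		 * we often just win the cacheline straight back, so the
		 * waiters can still starve; a queued lock would put us
		 * at the back of the line, which is what this code
		 * actually wants.
		 */
		cond_resched_lock(&list->lock);
	}
	spin_unlock(&list->lock);
}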

Suppose we had a completely fair spinlock primitive with *virtually*
no downsides compared to the unfair version; you'd take the fair one,
right?

Not that I'm saying they'd ever be a good solution to bad code, but I
do think some fairness is better than none, all else being equal.

>>extract:
>>
>>Implement queued spinlocks for i386. This shouldn't increase the size of
>>the spinlock structure, while still being able to handle 2^16 CPUs.
>
>
> Umm. i386 spinlocks could and should be *one*byte*.
>
> In fact, I don't even know why they are wasting four bytes right now: the
> fact that somebody made them an "int" just wastes memory. All the actual
> code uses "decb", so it's not even a question of safety. I wonder why we
> have that 32-bit thing and the ugly casts.

Anyway, I think the important point is that they can remain within 4
bytes, which is obviously the most important boundary (they could be
2 bytes on i386).
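
To spell out the size arithmetic (illustrative types only, not the layout
from the actual patch):

#include <stdint.h>

struct queued_spinlock {	/* sizeof == 4 */
	uint16_t next;		/* 2^16 tickets -> up to 65536 CPUs */
	uint16_t owner;
};

struct byte_spinlock {		/* sizeof == 1: the classic unfair decb lock */
	int8_t slock;
};

/* On i386, two 8-bit tickets (up to 256 CPUs) would fit in 2 bytes. */
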
--
SUSE Labs, Novell Inc.