Message-ID: <20070328065650.GB12508@wotan.suse.de>
Date: Wed, 28 Mar 2007 08:56:50 +0200
From: Nick Piggin <npiggin@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: mingo@...e.hu, linux-kernel@...r.kernel.org, kiran@...lex86.org
Subject: Re: [rfc][patch] queued spinlocks (i386)
On Sat, Mar 24, 2007 at 01:41:28PM -0800, Andrew Morton wrote:
> On Fri, 23 Mar 2007 11:32:44 +0100 Nick Piggin <npiggin@...e.de> wrote:
> >
> > I'm not as concerned about the contended performance of spinlocks
> >
>
> The contended case matters. Back in 2.5.something I screwed up the debug
> version of one of the locks (rwlock, iirc) - it was simply missing a
> cpu_relax(), and some people's benchmarks halved.
Do you have a reference?
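
For reference, here is a minimal sketch of why the cpu_relax() matters
in a spin loop. This is hypothetical debug-lock code for illustration,
not the actual 2.5 source:

static inline void debug_read_lock(rwlock_t *lock)
{
	/*
	 * Spin until the lock is acquired.  Without the cpu_relax(),
	 * the spinning CPU hammers the lock cache line (and on SMT it
	 * steals pipeline resources from its sibling), which fits the
	 * contended benchmarks dropping sharply when it went missing.
	 */
	while (!read_trylock(lock))
		cpu_relax();
}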
rwlocks are a bit funny, because if they are found to be useful (that
is, they get used somewhere), then it indicates there can be situations
with a lot of contention and spinning.
Whereas we usually prefer not to use spinlocks in situations like that.
Not that I'm claiming the contended case doesn't matter, but I think
problems there indicate a bug (and I think rwlocks are almost always
questionable).
Anyway, I'll look at doing some contended case optimisations afterward.
> > This was just something I had in mind when the hardware lock
> > starvation issue came up
>
> It looks like a good way to address the lru_lock starvation/capture
> problem. But I think I'd be more comfortable if we were to introduce it as
> a new lock type, rather than as a reimplementation of the existing
> spin_lock(). Initially, at least.
I'd hate to have a proliferation of lock types though. I think my
queued spinlock addresses a real hardware limitation of some systems.
In situations where contention isn't a problem, queued locks won't
cause a slowdown. In situations where it is, starvation could also be
a problem.
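
For concreteness, a minimal sketch of the queued (ticket) idea using
C11 atomics. The real patch is i386 asm; the type and function names
below are made up for illustration:

#include <stdatomic.h>

struct queued_lock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently being served */
};

static void queued_lock_acquire(struct queued_lock *l)
{
	/*
	 * Take a ticket; lockers are served strictly in ticket order,
	 * so no CPU can be starved no matter which waiter the cache
	 * coherency hardware happens to favour.
	 */
	unsigned int ticket = atomic_fetch_add(&l->next, 1);

	while (atomic_load_explicit(&l->owner,
				    memory_order_acquire) != ticket)
		;	/* spin (a real version would cpu_relax() here) */
}

static void queued_lock_release(struct queued_lock *l)
{
	atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}

In the uncontended case this is a single atomic op to take a ticket
that matches owner immediately, which is why it shouldn't slow down
the common path.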