Message-ID: <20100105185542.GH6714@linux.vnet.ibm.com>
Date: Tue, 5 Jan 2010 10:55:42 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Christoph Lameter <cl@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Minchan Kim <minchan.kim@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"hugh.dickins" <hugh.dickins@...cali.co.uk>,
Nick Piggin <nickpiggin@...oo.com.au>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
On Tue, Jan 05, 2010 at 10:25:43AM -0800, Linus Torvalds wrote:
>
>
> On Tue, 5 Jan 2010, Christoph Lameter wrote:
> >
> > If the critical section protected by the spinlock is small then the
> > delay will keep the cacheline exclusive until we hit the unlock. This
> > is the case here as far as I can tell.
>
> I hope somebody can time it. Because I think the idle reads on all the
> (unsuccessful) spinlocks will kill it.
But on many systems, it does take some time for the idle reads to make
their way to the CPU that just acquired the lock. My (admittedly dated)
experience is that the CPU acquiring the lock has a few bus clocks
before the cache line containing the lock gets snatched away.
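
To make the "idle read" behavior concrete, here is a minimal
test-and-test-and-set spinlock sketch in C11 atomics (purely illustrative,
not the kernel's arch_spinlock_t code).  Each waiter spins on a plain load,
so the contended line sits shared in every waiter's cache and generates no
write traffic until the owner's unlock store invalidates all the copies at
once:

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Hypothetical test-and-test-and-set lock, not the kernel's
 * implementation.  Waiters spin on a plain load of ->locked, so the
 * cache line stays in shared state across all of them between writes.
 */
struct tts_lock {
        atomic_bool locked;
};

static void tts_lock_acquire(struct tts_lock *l)
{
        for (;;) {
                /* Only attempt the atomic RMW when the lock looks free. */
                if (!atomic_load_explicit(&l->locked, memory_order_relaxed) &&
                    !atomic_exchange_explicit(&l->locked, true,
                                              memory_order_acquire))
                        return;

                /* "Idle read": spin read-only until the owner unlocks. */
                while (atomic_load_explicit(&l->locked, memory_order_relaxed))
                        ;       /* a cpu_relax()/PAUSE hint would go here */
        }
}

static void tts_lock_release(struct tts_lock *l)
{
        /* This store pulls the line exclusive and invalidates every waiter. */
        atomic_store_explicit(&l->locked, false, memory_order_release);
}

The unlock store is what kicks off the herd: every waiter re-reads the line
and a burst of them retry the exchange, which is the traffic pattern being
argued about below.
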
> Think of it this way: under heavy contention, you'll see a lot of people
> waiting for the spinlock, and one of them succeeds at writing it while the
> rest keep reading the line. So you get an O(n^2) bus traffic access pattern. In contrast,
> with an xadd, you get O(n) behavior - everybody does _one_ acquire-for-
> write bus access.
xadd (and xchg) certainly are nicer where they apply!
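
For comparison, here is a minimal ticket-lock sketch built on fetch-and-add
(xadd on x86), again illustrative rather than the kernel's actual ticket
spinlock: each waiter issues exactly one atomic read-modify-write to take
its ticket, and everything after that is read-only spinning, which is where
the O(n) acquire-for-write pattern comes from:

#include <stdatomic.h>

/*
 * Hypothetical ticket lock sketch, not the kernel's ticket spinlock.
 * Taking a ticket is one fetch-and-add (xadd); waiting is read-only.
 */
struct ticket_lock {
        atomic_uint next;       /* next ticket to hand out */
        atomic_uint owner;      /* ticket currently being served */
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
        /* The single xadd: the only write this CPU makes while waiting. */
        unsigned int ticket =
                atomic_fetch_add_explicit(&l->next, 1, memory_order_relaxed);

        /* Spin read-only until our number comes up. */
        while (atomic_load_explicit(&l->owner, memory_order_acquire) != ticket)
                ;       /* a cpu_relax()/PAUSE hint would go here */
}

static void ticket_lock_release(struct ticket_lock *l)
{
        /* Hand the lock to the next ticket in FIFO order. */
        atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}

The trade-off is strict FIFO ordering: the lock goes to whoever holds the
next ticket, so a waiter cannot opportunistically grab it the moment the
line shows up.
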
> Remember: the critical section is small, but since you're contending on
> the spinlock, that doesn't much _help_. The readers are all hitting the
> lock (and you can try to solve the O(n^2) issue with back-off, but quite
> frankly, anybody who does that has basically already lost - I'm personally
> convinced you should never do lock backoff, and instead look at what you
> did wrong at a higher level instead).
Music to my ears! ;-)
Thanx, Paul