Message-ID: <alpine.DEB.2.00.1001081138260.23727@router.home>
Date: Fri, 8 Jan 2010 11:43:41 -0600 (CST)
From: Christoph Lameter <cl@...ux-foundation.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
cc: Peter Zijlstra <peterz@...radead.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Minchan Kim <minchan.kim@...il.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"hugh.dickins" <hugh.dickins@...cali.co.uk>,
Nick Piggin <nickpiggin@...oo.com.au>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
On Fri, 8 Jan 2010, Linus Torvalds wrote:
> That's a huge jump. It's clear that the spinlock-based rwsem's simply
> suck. The speculation gets rid of some additional mmap_sem contention,
> but at least for two sockets it looks like the rwsem implementation was
> the biggest problem by far.
I'd say that the ticket lock sucks for short critical sections vs. a
simple spinlock since it forces the cacheline into shared mode.
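
To illustrate what I mean, here is a minimal user-space sketch (C11
atomics, made-up names, not the kernel's arch code): every ticket-lock
acquirer does a fetch-and-add on the lock word and then polls the owner
field, so all waiters keep reading the same line and every release has
to invalidate all of them, while a simple test-and-set lock is a single
atomic exchange on a line that can stay exclusive in one CPU's cache
for a short critical section.

#include <stdint.h>
#include <stdatomic.h>

/* Ticket lock: each acquirer writes the line (fetch-and-add on 'next')
 * and then spins reading 'owner', so the line is shared by all waiters
 * and bounces on every hand-off. */
struct ticket_lock {
	_Atomic uint16_t next;
	_Atomic uint16_t owner;
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
	uint16_t me = atomic_fetch_add_explicit(&l->next, 1,
						memory_order_relaxed);

	while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
		;	/* all waiters poll the same cacheline */
}

static void ticket_lock_release(struct ticket_lock *l)
{
	atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}

/* Simple test-and-set spinlock: acquire is one atomic exchange, and an
 * uncontended lock/unlock pair can keep the line exclusive locally. */
struct simple_lock {
	atomic_flag locked;
};

static void simple_lock_acquire(struct simple_lock *l)
{
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire))
		;	/* spin */
}

static void simple_lock_release(struct simple_lock *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

The simple lock is unfair under contention, of course; the point here is
only the cacheline traffic for the short-hold case.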
> Of course, larger numbers of sockets will likely change the situation, but
> at the same time I do suspect that workloads designed for hundreds of
> cores will need to try to behave better than that benchmark anyway ;)
Can we at least consider a typical business server, say a dual quad-core
hyperthreaded box with 16 "cpus"? Cacheline contention will increase
significantly there.
> Because let's face it - if your workload does several million page faults
> per second, you're just doing something fundamentally _wrong_.
You may just want to get your app running, and it's trying to initialize
its memory in parallel on all threads. Nothing wrong with that.
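
Something like this hypothetical snippet (plain user-space C, the sizes
and names are just for illustration) is all it takes: one big anonymous
mapping, one thread per "cpu" memsetting its own slice, and every first
touch of a page is a fault taken while the other threads are faulting
concurrently under the same mmap_sem.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define NTHREADS	16		/* one per "cpu" on the box above */
#define REGION_SIZE	(1UL << 30)	/* 1GB anonymous mapping */

static char *region;

static void *init_slice(void *arg)
{
	long id = (long)arg;
	size_t slice = REGION_SIZE / NTHREADS;

	/* First write to each page takes a fault; 16 threads do this at once. */
	memset(region + id * slice, 0x5a, slice);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	long i;

	region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, init_slice, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("initialized %lu MB across %d threads\n",
	       REGION_SIZE >> 20, NTHREADS);
	return 0;
}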