Message-Id: <20100106125625.b02c1b3a.kamezawa.hiroyu@jp.fujitsu.com>
Date: Wed, 6 Jan 2010 12:56:25 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Minchan Kim <minchan.kim@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, cl@...ux-foundation.org,
"hugh.dickins" <hugh.dickins@...cali.co.uk>,
Nick Piggin <nickpiggin@...oo.com.au>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
On Tue, 5 Jan 2010 19:27:07 -0800 (PST)
Linus Torvalds <torvalds@...ux-foundation.org> wrote:
>
>
> On Wed, 6 Jan 2010, KAMEZAWA Hiroyuki wrote:
> >
> > My host boots successfully. Here is the result.
>
> Hey, looks good. It does have that 3% trylock overhead:
>
> 3.17% multi-fault-all [kernel] [k] down_read_trylock
>
> but that doesn't seem excessive.
>
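Yes. For reference, that trylock cost comes from the existing fast
path, which does roughly the following on every fault (simplified from
arch/x86/mm/fault.c; the exception-table fallback details are omitted):

	if (unlikely(!down_read_trylock(&mm->mmap_sem))) {
		/* contended (e.g. a writer holds it): block */
		down_read(&mm->mmap_sem);
	}
	vma = find_vma(mm, address);
	/* ... handle_mm_fault() etc. ... */
	up_read(&mm->mmap_sem);

So even the uncontended case pays one atomic op on mmap_sem per fault,
which is what that 3% is.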
> Of course, your other load with MADV_DONTNEED seems to be horrible, and
> has some nasty spinlock issues, but that looks like a separate deal (I
> assume that load is just very hard on the pgtable lock).
>
It's zone->lock, I guess. My test program avoids the pgtable lock problem.
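To be concrete, each thread in my test does roughly the following (a
simplified sketch of the multi-fault-all style loop, not the exact
program; the 8MB-per-thread size is illustrative):

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/mman.h>

	#define LEN	(8UL * 1024 * 1024)	/* 8MB per thread */

	static void *worker(void *arg)
	{
		unsigned long off;
		char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			exit(1);
		for (;;) {
			/* one minor fault per page */
			for (off = 0; off < LEN; off += 4096)
				buf[off] = 1;
			/* drop the pages; next pass faults again */
			madvise(buf, LEN, MADV_DONTNEED);
		}
		return NULL;
	}

	int main(int argc, char **argv)
	{
		int i, nr = (argc > 1) ? atoi(argv[1]) : 8;
		pthread_t tid;

		for (i = 0; i < nr; i++)
			pthread_create(&tid, NULL, worker, NULL);
		sleep(60);	/* perf stat counts faults for 60 sec */
		return 0;
	}

Every pass frees pages back to the page allocator and faults them in
again, which is why I suspect zone->lock rather than the pgtable lock.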
> That said, profiles are hard to compare performance with - the main thing
> that matters for performance is not how the profile looks, but how it
> actually performs. So:
>
> > Then, the result is much improved by XADD rwsem.
> >
> > In the above profile, rwsem is still there.
> > But page-fault/sec is good. I hope some "big" machine users will join the test.
>
> That "page-fault/sec" number is ultimately the only thing that matters.
>
yes.
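(From the numbers below, that is about 41950863/60 = ~699k faults/sec
for the XADD rwsem vs. 35835485/60 = ~597k faults/sec for my patch,
i.e. the XADD version wins by roughly 17%.)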
> > Here is the performance counter result of the DONTNEED test. It counts the number
> > of page faults in 60 sec, so a bigger number of page faults is better.
> >
> > [XADD rwsem]
> > [root@...extal memory]# /root/bin/perf stat -e page-faults,cache-misses --repeat 5 ./multi-fault-all 8
> >
> > Performance counter stats for './multi-fault-all 8' (5 runs):
> >
> > 41950863 page-faults ( +- 1.355% )
> > 502983592 cache-misses ( +- 0.628% )
> >
> > 60.002682206 seconds time elapsed ( +- 0.000% )
> >
> > [my patch]
> > [root@...extal memory]# /root/bin/perf stat -e page-faults,cache-misses --repeat 5 ./multi-fault-all 8
> >
> > Performance counter stats for './multi-fault-all 8' (5 runs):
> >
> > 35835485 page-faults ( +- 0.257% )
> > 511445661 cache-misses ( +- 0.770% )
> >
> > 60.004243198 seconds time elapsed ( +- 0.002% )
> >
> > Ah....xadd-rwsem seems to be faster than my patch ;)
>
> Hey, that sounds great. NOTE! My patch really could be improved. In
> particular, I suspect that on x86-64, we should take advantage of the
> 64-bit counter, and use a different RW_BIAS. That way we're not limited to
> 32k readers, which _could_ otherwise be a problem.
>
> So consider my rwsem patch to be purely preliminary. Now that you've
> tested it, I feel a lot better about it being basically correct, but it
> has room for improvement.
>
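Just to check my understanding of the 64-bit idea: something like the
following layout, where the active count gets the low 32 bits instead
of 16 (my guess at the constants, not taken from your patch)?

	#define RWSEM_UNLOCKED_VALUE	0x00000000L
	#define RWSEM_ACTIVE_MASK	0xffffffffL
	#define RWSEM_ACTIVE_BIAS	0x00000001L
	#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)
	#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS
	#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

down_read() would still be a single "lock xadd" on the count followed
by a sign test, but the reader limit becomes ~2G instead of 32k.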
I'd like to stop updating my patch and wait to see how this issue goes.
Anyway, testing on a big machine would be appreciated, because I can
test only on a 2-socket host.
Thanks,
-Kame