Message-ID: <20090120193128.GA21481@elte.hu>
Date: Tue, 20 Jan 2009 20:31:28 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Zachary Amsden <zach@...are.com>
Cc: Nick Piggin <npiggin@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"hpa@...or.com" <hpa@...or.com>,
"jeremy@...source.com" <jeremy@...source.com>,
"chrisw@...s-sol.org" <chrisw@...s-sol.org>,
"rusty@...tcorp.com.au" <rusty@...tcorp.com.au>
Subject: Re: lmbench lat_mmap slowdown with CONFIG_PARAVIRT

* Zachary Amsden <zach@...are.com> wrote:
> On Tue, 2009-01-20 at 03:26 -0800, Ingo Molnar wrote:
>
> > Jeremy, any ideas where this slowdown comes from and how it could be
> > fixed?
>
> Well, I'm responding to this thread early, before reading on, but I
> looked at the generated assembly for some common mm paths and it
> looked awful. The biggest loser was probably having functions to
> convert pte_t back and forth to pteval_t, which makes most potential
> mask / shift optimizations impossible - indeed, because the compiler
> doesn't even understand that pte_val(X) is invariant over the lifetime
> of the function, it often repeats the same conversions back and forth
> several times, and because this is often done inside hidden macros,
> it's not even possible to cache a converted value in most places.
>
> The extra state required to keep these conversions around ties up a
> lot of registers, and as a result heavily limits further optimization.
>
> The code did not look more branchy to me, however, and gcc seemed to
> do a good job of lining up a nice branch structure in the few paths I
> looked at.
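
[ For reference, here is a simplified sketch of the pte_t <-> pteval_t
  round-trip Zach describes - an illustration of the native vs.
  CONFIG_PARAVIRT code shapes, not the exact kernel code: ]

typedef unsigned long pteval_t;
typedef struct { pteval_t pte; } pte_t;

#ifndef CONFIG_PARAVIRT

/* Native: a type pun the compiler sees through and fully optimizes. */
static inline pteval_t pte_val(pte_t pte) { return pte.pte; }
static inline pte_t __pte(pteval_t val)   { return (pte_t) { val }; }

#else

/*
 * Paravirt: each conversion is an indirect call through an ops
 * structure, so the compiler must assume the result can change
 * between calls and cannot fold masks/shifts across it.
 */
extern struct pv_mmu_ops {
	pteval_t (*pte_val)(pte_t pte);
	pte_t    (*make_pte)(pteval_t val);
} pv_mmu_ops;

static inline pteval_t pte_val(pte_t pte)
{
	return pv_mmu_ops.pte_val(pte);
}

static inline pte_t __pte(pteval_t val)
{
	return pv_mmu_ops.make_pte(val);
}

#endif

/*
 * A typical hidden-macro user: each predicate does a full conversion,
 * so chained tests re-convert the same pte several times.
 */
#define _PAGE_RW	0x002UL
#define _PAGE_DIRTY	0x040UL
#define pte_write(pte)	(pte_val(pte) & _PAGE_RW)
#define pte_dirty(pte)	(pte_val(pte) & _PAGE_DIRTY)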
I've extended my mmap test with branch-execution hw-perfcounter stats:
 -----------------------------------------------
 | Performance counter stats for './mmap-perf' |
 -----------------------------------------------
 |               |
 | x86-defconfig |  PARAVIRT=y
 |---------------------------------------------------------------------
 |               |
 |  1311.554526  |  1360.624932  task clock ticks (msecs)     +3.74%
 |               |
 |            1  |            1  CPU migrations
 |           91  |           79  context switches
 |        55945  |        55943  pagefaults
 | ............................................
 |   3781392474  |   3918777174  CPU cycles                   +3.63%
 |   1957153827  |   2161280486  instructions                +10.43%
 |     50234816  |     51303520  cache references             +2.12%
 |      5428258  |      5583728  cache misses                 +2.86%
 |               |
 |    437983499  |    478967061  branches                     +9.36%
 |     32486067  |     32336874  branch-misses                -0.46%
 |               |
 |  1314.782469  |  1363.694447  time elapsed (msecs)         +3.72%
 |               |
 -----------------------------------------------
So we execute 9.36% more branches - also a very noticeable increase. The
CPU predicts them slightly more effectively though: the -0.46% change in
branch-misses is well above the measurement noise (~0.02% for the branch
metrics), so it's a systematic effect - in relative terms the miss rate
drops from ~7.4% to ~6.8%.
Non-functional 'boring' bloat tends to be easier to predict, so that is
not necessarily a real surprise. It also explains why, despite +10.43%
more instructions, the total cycle count went up by a comparatively
smaller +3.63%.
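
[ For anyone wanting to gather similar numbers: a minimal sketch of
  counting branches and branch-misses around a workload via the
  perf_event_open() syscall. The syscall name and ABI here are the
  later mainline ones, not the prototype perfcounters patches these
  numbers came from: ]

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_counter(__u64 config, int group_fd)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = PERF_TYPE_HARDWARE;
	attr.config = config;

	/* count this task, on any CPU, starting immediately */
	return syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
	long long branches, misses;
	int fd_br, fd_miss;

	fd_br   = open_counter(PERF_COUNT_HW_BRANCH_INSTRUCTIONS, -1);
	fd_miss = open_counter(PERF_COUNT_HW_BRANCH_MISSES, fd_br);

	/* ... run the mmap/pagefault workload here ... */

	read(fd_br,   &branches, sizeof(branches));
	read(fd_miss, &misses,   sizeof(misses));

	printf("branches:      %lld\n", branches);
	printf("branch-misses: %lld\n", misses);
	return 0;
}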
[ that's 64-bit x86 btw. ]
Ingo