Message-ID: <20120118111404.GA12152@elte.hu>
Date: Wed, 18 Jan 2012 12:14:04 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Jan Beulich <JBeulich@...e.com>
Cc: tglx@...utronix.de, Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, hpa@...or.com
Subject: Re: [PATCH] x86-64: fix memset() to support sizes of 4Gb and above

* Jan Beulich <JBeulich@...e.com> wrote:
> >>> On 06.01.12 at 12:05, Ingo Molnar <mingo@...e.hu> wrote:
> >> * Jan Beulich <JBeulich@...e.com> wrote:
> > Would be nice to add support for arch/x86/lib/memset_64.S as
> > well, and look at the before/after performance of it.
>
> Got this done, will post the patch soon. However, ...
>
> > For example the kernel's memcpy routine is slightly faster
> > than glibc's:
>
> This is an illusion [...]
Oh ...
> [...] - since the kernel's memcpy_64.S also defines a "memcpy"
> (not just "__memcpy"), the static linker resolves the
> reference from mem-memcpy.c against this one. Apparent
> performance differences rather point at effects like
> (guessing) branch prediction (using the second vs the first
> entry of routines[]). After fixing this, on my Westmere box
> glibc's is quite a bit slower than the unrolled kernel variant
> (4% fewer instructions, but about 15% more cycles).
Cool and thanks for looking into this. Will wait for your
patch(es).
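
For reference, that kind of symbol shadowing is easy to
reproduce outside of perf: any unadorned memcpy definition
linked into a binary wins over libc's. A minimal standalone
sketch (nothing below is from the perf sources, build with
"gcc -fno-builtin shadow.c" so GCC does not expand memcpy
inline):

/* shadow.c */
#include <stdio.h>
#include <string.h>

static int local_memcpy_used;

/*
 * An unadorned definition, like the "memcpy" label in the
 * kernel's memcpy_64.S: the static linker resolves every
 * memcpy reference in the binary against it, so a benchmark
 * believing it measures glibc's memcpy measures this instead.
 */
void *memcpy(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	local_memcpy_used = 1;
	while (n--)
		*d++ = *s++;
	return dst;
}

int main(void)
{
	char buf[8];

	memcpy(buf, "hello", 6);	/* binds to the local definition */
	printf("%s (local memcpy used: %d)\n", buf, local_memcpy_used);
	return 0;
}

So with the kernel routine linked in under the plain "memcpy"
name, the "glibc" and "kernel" entries of routines[] end up
timing the same code - presumably keeping only the __memcpy
alias in the linked-in copy is what avoids the clash.
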
> > If such measurements all suggests equal or better
> > performance, and if there's no erratum in current CPUs that
> > would make 4G string copies dangerous [which your research
> > suggests should be fine], i have no principled objection
> > to this patch.
>
> If I interpreted things correctly, there's a tiny win with the
> changes (also for not-yet-posted memcpy equivalent):
Nice. That would be the expectation from the reduction in the
instruction count. Seems to be slightly above the noise threshold
of the measurement.

Note that the variance between different perf bench runs is
sometimes larger than the reported standard deviation - this can
be seen from the three repeated --repeat 1000 runs you did.

I believe this effect is due to memory layout artifacts - so far
I have found no good way to move that kind of variance inside
the perf stat --repeat runs.
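
To illustrate with invented numbers: if three --repeat runs
report means that spread wider than the per-run standard
deviation, the between-run (layout) variance dominates. A toy
check:

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical results of three --repeat 1000 runs */
	double mean[3] = { 102.1, 104.8, 101.5 };	/* usecs, invented */
	double stddev  = 0.4;				/* reported +- per run */
	double m = 0.0, spread = 0.0;
	int i;

	for (i = 0; i < 3; i++)
		m += mean[i] / 3.0;
	for (i = 0; i < 3; i++)
		spread += (mean[i] - m) * (mean[i] - m);
	spread = sqrt(spread / 3.0);

	/* spread (~1.4) >> stddev (0.4): between-run effects dominate */
	printf("cross-run spread: %.2f, reported stddev: %.2f\n",
	       spread, stddev);
	return 0;
}
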
Maybe we could allocate a random amount of memory in user-space,
in the [0..1MB] range, before doing a repeat run (and freeing it
after an iteration), and perhaps dup() stdout randomly, to fuzz
the kmalloc and page allocation layout patterns?
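
Something along these lines, as a sketch (all names here are
hypothetical, this is not perf code):

/*
 * Layout fuzzing: before each iteration, allocate a random
 * amount of memory in the [0..1MB] range and dup() stdout a
 * random number of times, to perturb the heap and fd-table
 * layout; undo both afterwards.
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void *fuzz_mem;
static int fuzz_fds[8];
static int fuzz_nfds;

static void layout_fuzz_begin(void)
{
	size_t len = (size_t)rand() % (1 << 20);	/* 0..1MB */
	int i;

	fuzz_mem = malloc(len + 1);
	if (fuzz_mem)
		memset(fuzz_mem, 0, len + 1);		/* fault the pages in */

	fuzz_nfds = rand() % 8;
	for (i = 0; i < fuzz_nfds; i++)
		fuzz_fds[i] = dup(STDOUT_FILENO);
}

static void layout_fuzz_end(void)
{
	while (fuzz_nfds--)
		close(fuzz_fds[fuzz_nfds]);
	fuzz_nfds = 0;
	free(fuzz_mem);
	fuzz_mem = NULL;
}

perf stat would call layout_fuzz_begin()/layout_fuzz_end()
around each --repeat iteration - whether that actually
randomizes the kmalloc and page allocation patterns enough on
the kernel side would need measuring.
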
Thanks,
Ingo