Date:	Wed, 11 Nov 2009 15:21:26 -0800
From:	"H. Peter Anvin" <>
To:	"Ma, Ling" <>
CC:	Ingo Molnar <>, Ingo Molnar <>,
	Thomas Gleixner <>,
	linux-kernel <>
Subject: Re: [PATCH RFC] [X86] performance improvement for memcpy_64.S by
 fast string.

On 11/10/2009 11:57 PM, Ma, Ling wrote:
> Hi Ingo
> This program is for 64bit version, so please use 'cc -o memcpy  memcpy.c -O2 -m64'

I did some measurements with this program; I added power-of-two
measurements from 1-512 bytes, plus some different alignments, and found
some very interesting results:

On Nehalem:

	memcpy_new is a win for 1024+ bytes, but *also* a win for 2-32
	bytes, where the old code apparently performs appallingly badly.

	memcpy_new loses in the 64-512 byte range, so the 1024
	threshold is probably justified.

On an older, pre-Nehalem CPU:

	memcpy_new is a win for <= 512 bytes, but a loss for larger
	copies (possibly a win again for 16K+ copies, but those are
	very rare in the Linux kernel.)  Surprise...

	However, the difference is very small.

However, I had overlooked something much more fundamental about your
patch.  On Nehalem, at least *it will never get executed* (except during
very early startup), because we replace the memcpy code with a jmp to
memcpy_c on any CPU which has X86_FEATURE_REP_GOOD, which includes Nehalem.

So the patch is a no-op on Nehalem, and any other modern CPU.
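To make the mechanism concrete: the alternatives patching effectively selects one implementation at boot based on the feature flag. Modeled in plain C with a function pointer (a simplification of what alternatives actually do, with hypothetical stand-in functions):

```c
#include <stddef.h>

typedef void *(*memcpy_fn)(void *, const void *, size_t);

/* Hypothetical stand-ins for the two kernel variants; both are plain
   byte loops here so the sketch stays portable. */
static void *memcpy_unrolled(void *d, const void *s, size_t n)
{
    unsigned char *dp = d;
    const unsigned char *sp = s;
    while (n--)
        *dp++ = *sp++;
    return d;
}

static void *memcpy_c_variant(void *d, const void *s, size_t n)
{
    unsigned char *dp = d;
    const unsigned char *sp = s;
    for (size_t i = 0; i < n; i++)
        dp[i] = sp[i];
    return d;
}

/* Models the effect of the boot-time patching: on a CPU with
   X86_FEATURE_REP_GOOD the unrolled body is simply never reached
   again after early startup. */
static memcpy_fn pick_memcpy(int has_rep_good)
{
    return has_rep_good ? memcpy_c_variant : memcpy_unrolled;
}
```

In the real kernel this selection is done by rewriting the first instructions of memcpy into a jmp, not through an indirect call, but the dispatch outcome is the same.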

Am I right in guessing that the perf numbers you posted originally were
all from your user-space test program?

