Message-Id: <4F218584020000780006F422@nat28.tlf.novell.com>
Date: Thu, 26 Jan 2012 15:55:32 +0000
From: "Jan Beulich" <JBeulich@...e.com>
To: <mingo@...e.hu>, <tglx@...utronix.de>, <hpa@...or.com>
Cc: <linux-kernel@...r.kernel.org>
Subject: [PATCH] x86-64: handle byte-wise tail copying in memcpy() without a loop
While hard to measure, reducing the number of possibly/likely
mis-predicted branches can generally be expected to be a slight win.
Contrary to what one might expect at first glance, this also doesn't
grow the function size (the alignment gap to the next function merely
gets smaller).
Signed-off-by: Jan Beulich <jbeulich@...e.com>
---
arch/x86/lib/memcpy_64.S | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- 3.3-rc1/arch/x86/lib/memcpy_64.S
+++ 3.3-rc1-x86_64-memcpy-tail/arch/x86/lib/memcpy_64.S
@@ -169,18 +169,19 @@ ENTRY(memcpy)
retq
.p2align 4
.Lless_3bytes:
- cmpl $0, %edx
- je .Lend
+ subl $1, %edx
+ jb .Lend
/*
* Move data from 1 bytes to 3 bytes.
*/
-.Lloop_1:
- movb (%rsi), %r8b
- movb %r8b, (%rdi)
- incq %rdi
- incq %rsi
- decl %edx
- jnz .Lloop_1
+ movzbl (%rsi), %ecx
+ jz .Lstore_1byte
+ movzbq 1(%rsi), %r8
+ movzbq (%rsi, %rdx), %r9
+ movb %r8b, 1(%rdi)
+ movb %r9b, (%rdi, %rdx)
+.Lstore_1byte:
+ movb %cl, (%rdi)
.Lend:
retq
--
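For readers less fluent in assembly, the new tail handling can be
sketched in C. This is a hypothetical illustration, not kernel code:
`copy_tail_1to3` and its parameter names are invented for this sketch.
The trick is that for a 2- or 3-byte tail, storing byte 1 and byte
n-1 covers the remainder in both cases (the two stores simply overlap
when n == 2), so no per-byte loop or extra branch is needed.

```c
#include <stddef.h>

/*
 * Hypothetical C rendering of the patched .Lless_3bytes path.
 * n is the number of tail bytes, known to be at most 3.
 */
static void copy_tail_1to3(unsigned char *dst,
			   const unsigned char *src, size_t n)
{
	if (n == 0)
		return;			/* subl $1,%edx / jb .Lend */

	unsigned char c0 = src[0];	/* movzbl (%rsi),%ecx */
	if (n > 1) {			/* jz .Lstore_1byte skips this */
		unsigned char c1 = src[1];	/* movzbq 1(%rsi),%r8 */
		unsigned char cn = src[n - 1];	/* movzbq (%rsi,%rdx),%r9 */
		dst[1] = c1;		/* movb %r8b,1(%rdi) */
		dst[n - 1] = cn;	/* movb %r9b,(%rdi,%rdx);
					   overlaps dst[1] when n == 2 */
	}
	dst[0] = c0;			/* .Lstore_1byte: movb %cl,(%rdi) */
}
```

Note that all loads are performed before any store, mirroring the
assembly, so the copy is correct even though two of the stores may
alias for n == 2.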