Open Source and information security mailing list archives
Date: Thu, 16 Sep 2010 18:47:59 +0800
From: Miao Xie <miaox@...fujitsu.com>
To: Andi Kleen <andi@...stfloor.org>
CC: Andrew Morton <akpm@...ux-foundation.org>, Ingo Molnar <mingo@...e.hu>,
	"Theodore Ts'o" <tytso@....edu>, Chris Mason <chris.mason@...cle.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	Linux Btrfs <linux-btrfs@...r.kernel.org>,
	Linux Ext4 <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH] x86_64/lib: improve the performance of memmove

On Thu, 16 Sep 2010 12:11:41 +0200, Andi Kleen wrote:
> On Thu, 16 Sep 2010 17:29:32 +0800
> Miao Xie <miaox@...fujitsu.com> wrote:
>
> Ok, that was a very broken patch. Sorry, I should have really done some
> more work on it. Anyway, hopefully the corrected version is good for
> testing.
>
> -Andi
>

title: x86_64/lib: improve the performance of memmove

Implement the 64bit memmove backwards case using string instructions

Signed-off-by: Andi Kleen <ak@...ux.intel.com>
Signed-off-by: Miao Xie <miaox@...fujitsu.com>
---
 arch/x86/lib/memcpy_64.S  |   29 +++++++++++++++++++++++++++++
 arch/x86/lib/memmove_64.c |    8 ++++----
 2 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index bcbcd1e..9de5e9a 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -141,3 +141,32 @@ ENDPROC(__memcpy)
 	.byte .Lmemcpy_e - .Lmemcpy_c
 	.byte .Lmemcpy_e - .Lmemcpy_c
 	.previous
+
+/*
+ * Copy memory backwards (for memmove)
+ * rdi target
+ * rsi source
+ * rdx count
+ */
+
+ENTRY(memcpy_backwards)
+	CFI_STARTPROC
+	std
+	movq %rdi, %rax
+	movl %edx, %ecx
+	addq %rdx, %rdi
+	addq %rdx, %rsi
+	leaq -8(%rdi), %rdi
+	leaq -8(%rsi), %rsi
+	shrl $3, %ecx
+	andl $7, %edx
+	rep movsq
+	addq $7, %rdi
+	addq $7, %rsi
+	movl %edx, %ecx
+	rep movsb
+	cld
+	ret
+	CFI_ENDPROC
+ENDPROC(memcpy_backwards)
+
diff --git a/arch/x86/lib/memmove_64.c b/arch/x86/lib/memmove_64.c
index 0a33909..6774fd8 100644
--- a/arch/x86/lib/memmove_64.c
+++ b/arch/x86/lib/memmove_64.c
@@ -5,16 +5,16 @@
 #include <linux/string.h>
 #include <linux/module.h>
 
+extern void * asmlinkage memcpy_backwards(void *dst, const void *src,
+					  size_t count);
+
 #undef memmove
 void *memmove(void *dest, const void *src, size_t count)
 {
 	if (dest < src) {
 		return memcpy(dest, src, count);
 	} else {
-		char *p = dest + count;
-		const char *s = src + count;
-		while (count--)
-			*--p = *--s;
+		return memcpy_backwards(dest, src, count);
 	}
 	return dest;
 }
-- 
1.7.0.1
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html