Date: Wed, 30 Jul 2008 14:02:27 +0200
From: Vitaly Mayatskikh <v.mayatskih@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Vitaly Mayatskikh <v.mayatskih@...il.com>, linux-kernel@...r.kernel.org,
	Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] x86: Optimize tail handling for copy_user

Linus Torvalds <torvalds@...ux-foundation.org> writes:

> On Mon, 28 Jul 2008, Vitaly Mayatskikh wrote:
>>
>> Reduce protection faults count in copy_user_handle_tail routine by
>> limiting clear length to the end of page as was suggested by Linus.
>
> No, you did it wrong.

Another try. Added direction and clear-remainder flags to let the
tail-handling routine know how to optimize tail copying and clearing.
The BYTES_LEFT_IN_PAGE macro returns PAGE_SIZE, not zero, when the
address is aligned to a page boundary.

Signed-off-by: Vitaly Mayatskikh <v.mayatskih@...il.com>

diff --git a/include/asm-x86/uaccess_64.h b/include/asm-x86/uaccess_64.h
index 5cfd295..e0ddedf 100644
--- a/include/asm-x86/uaccess_64.h
+++ b/include/asm-x86/uaccess_64.h
@@ -1,6 +1,16 @@
 #ifndef ASM_X86__UACCESS_64_H
 #define ASM_X86__UACCESS_64_H
 
+/* Flags for copy_user_handle_tail */
+#define CLEAR_REMAINDER 1
+#define DEST_IS_USERSPACE 2
+#define SOURCE_IS_USERSPACE 4
+
+#define BYTES_LEFT_IN_PAGE(ptr) \
+	(unsigned)((PAGE_MASK & ((long)(ptr) + PAGE_SIZE)) - (long)(ptr))
+
+#ifndef __ASSEMBLY__
+
 /*
  * User space memory access functions
  */
@@ -179,23 +189,26 @@ __copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
 }
 
 extern long __copy_user_nocache(void *dst, const void __user *src,
-				unsigned size, int zerorest);
+				unsigned size, unsigned flags);
 
 static inline int __copy_from_user_nocache(void *dst, const void __user *src,
 					   unsigned size)
 {
 	might_sleep();
-	return __copy_user_nocache(dst, src, size, 1);
+	return __copy_user_nocache(dst, src, size, SOURCE_IS_USERSPACE
+				   | CLEAR_REMAINDER);
 }
 
 static inline int __copy_from_user_inatomic_nocache(void *dst,
 					const void __user *src, unsigned size)
 {
-	return __copy_user_nocache(dst, src, size, 0);
+	return __copy_user_nocache(dst, src, size, SOURCE_IS_USERSPACE);
 }
 
 unsigned long
-copy_user_handle_tail(char *to, char *from, unsigned len, unsigned zerorest);
+copy_user_handle_tail(char *dst, char *src, unsigned remainder, unsigned flags);
+
+#endif /* __ASSEMBLY__ */
 
 #endif /* ASM_X86__UACCESS_64_H */

-- 
wbr, Vitaly