Open Source and information security mailing list archives
Date: Sun, 17 Jul 2022 01:17:40 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: x86@...nel.org, Linus Torvalds <torvalds@...ux-foundation.org>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Josh Poimboeuf <jpoimboe@...nel.org>,
	Andrew Cooper <Andrew.Cooper3@...rix.com>,
	Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
	Johannes Wikner <kwikner@...z.ch>,
	Alyssa Milburn <alyssa.milburn@...ux.intel.com>,
	Jann Horn <jannh@...gle.com>,
	"H.J. Lu" <hjl.tools@...il.com>,
	Joao Moreira <joao.moreira@...el.com>,
	Joseph Nuzman <joseph.nuzman@...el.com>,
	Steven Rostedt <rostedt@...dmis.org>
Subject: [patch 19/38] x86/module: Provide __module_alloc()

Provide a function to allocate from module space with large TLBs. This is
required for callthunks as otherwise the ITLB pressure kills performance.

Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
 arch/x86/include/asm/module.h |    2 ++
 arch/x86/mm/module_alloc.c    |   10 ++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/module.h
+++ b/arch/x86/include/asm/module.h
@@ -13,4 +13,6 @@ struct mod_arch_specific {
 #endif
 };
 
+extern void *__module_alloc(unsigned long size, unsigned long vmflags);
+
 #endif /* _ASM_X86_MODULE_H */
--- a/arch/x86/mm/module_alloc.c
+++ b/arch/x86/mm/module_alloc.c
@@ -39,7 +39,7 @@ static unsigned long int get_module_load
 }
 #endif
 
-void *module_alloc(unsigned long size)
+void *__module_alloc(unsigned long size, unsigned long vmflags)
 {
 	gfp_t gfp_mask = GFP_KERNEL;
 	void *p;
@@ -47,10 +47,11 @@ void *module_alloc(unsigned long size)
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 
+	vmflags |= VM_FLUSH_RESET_PERMS | VM_DEFER_KMEMLEAK;
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
				 MODULES_VADDR + get_module_load_offset(),
				 MODULES_END, gfp_mask, PAGE_KERNEL,
-				 VM_FLUSH_RESET_PERMS | VM_DEFER_KMEMLEAK,
+				 vmflags,
				 NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (p && (kasan_alloc_module_shadow(p, size, gfp_mask) < 0)) {
@@ -60,3 +61,8 @@ void *module_alloc(unsigned long size)
 
 	return p;
 }
+
+void *module_alloc(unsigned long size)
+{
+	return __module_alloc(size, 0);
+}