Message-Id: <20220919201648.2250764-1-keescook@chromium.org>
Date: Mon, 19 Sep 2022 13:16:48 -0700
From: Kees Cook <keescook@...omium.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Kees Cook <keescook@...omium.org>, Yu Zhao <yuzhao@...gle.com>,
	dev@...-flo.net, Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Josh Poimboeuf <jpoimboe@...nel.org>,
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
	stable@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
	"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
	Al Viro <viro@...iv.linux.org.uk>, linux-kernel@...r.kernel.org,
	linux-hardening@...r.kernel.org
Subject: [PATCH v2] x86/uaccess: Avoid check_object_size() in copy_from_user_nmi()

The check_object_size() helper under CONFIG_HARDENED_USERCOPY is
designed to skip any checks where the length is known at compile time as
a reasonable heuristic to avoid "likely known-good" cases. However, it
can only do this when the copy_*_user() helpers are, themselves, inline
too.

Using find_vmap_area() requires taking a spinlock. The
check_object_size() helper can call find_vmap_area() when the
destination is in vmap memory. If show_regs() is called in interrupt
context, it will attempt a call to copy_from_user_nmi(), which may call
check_object_size() and then find_vmap_area(). If something in normal
context happens to be in the middle of calling find_vmap_area() (with
the spinlock held), the interrupt handler will hang forever.

The copy_from_user_nmi() call is actually being called with a fixed-size
length, so check_object_size() should never have been called in the
first place. Given the narrow constraints, just replace the
__copy_from_user_inatomic() call with an open-coded version that calls
only into the sanitizers and not check_object_size(), followed by a call
to raw_copy_from_user().

Reported-by: Yu Zhao <yuzhao@...gle.com>
Link: https://lore.kernel.org/all/CAOUHufaPshtKrTWOz7T7QFYUNVGFm0JBjvM700Nhf9qEL9b3EQ@mail.gmail.com
Reported-by: dev@...-flo.net
Suggested-by: Andrew Morton <akpm@...ux-foundation.org>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Josh Poimboeuf <jpoimboe@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: x86@...nel.org
Fixes: 0aef499f3172 ("mm/usercopy: Detect vmalloc overruns")
Cc: stable@...r.kernel.org
Signed-off-by: Kees Cook <keescook@...omium.org>
---
v2: drop the call explicitly instead of using inline to do it
v1: https://lore.kernel.org/lkml/20220916135953.1320601-1-keescook@chromium.org
---
 arch/x86/lib/usercopy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index ad0139d25401..d2aff9b176cf 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -44,7 +44,8 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
 	 * called from other contexts.
 	 */
 	pagefault_disable();
-	ret = __copy_from_user_inatomic(to, from, n);
+	instrument_copy_from_user(to, from, n);
+	ret = raw_copy_from_user(to, from, n);
 	pagefault_enable();
 
 	return ret;
-- 
2.34.1
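For readers outside the thread, the change is easier to see next to a rough
paraphrase of what __copy_from_user_inatomic() does compared with the
open-coded replacement in the hunk above. The sketch below is illustrative
only: the illustrative_* names are not kernel symbols, and it simplifies the
real include/linux/uaccess.h definition. It is meant to show which step,
check_object_size(), the patch drops from the NMI path.

#include <linux/uaccess.h>	/* raw_copy_from_user(), pagefault_disable() */
#include <linux/instrumented.h>	/* instrument_copy_from_user() */
#include <linux/thread_info.h>	/* check_object_size() */

/* Roughly what __copy_from_user_inatomic() expands to (simplified): */
static __always_inline unsigned long
illustrative_copy_from_user_inatomic(void *to, const void __user *from,
				     unsigned long n)
{
	instrument_copy_from_user(to, from, n);	/* sanitizer hooks */
	check_object_size(to, n, false);	/* HARDENED_USERCOPY check;
						 * can reach find_vmap_area()
						 * and its spinlock */
	return raw_copy_from_user(to, from, n);
}

/*
 * The open-coded path in copy_from_user_nmi() after this patch keeps the
 * instrumentation but skips check_object_size():
 */
static unsigned long
illustrative_nmi_safe_copy(void *to, const void __user *from, unsigned long n)
{
	unsigned long ret;

	pagefault_disable();
	instrument_copy_from_user(to, from, n);
	ret = raw_copy_from_user(to, from, n);
	pagefault_enable();

	return ret;
}

Since copy_from_user_nmi() is only reached with fixed-size lengths (per the
commit message above), dropping the hardened-usercopy check here loses no
meaningful coverage while keeping NMI context away from the find_vmap_area()
spinlock.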