Message-Id: <20220923202822.2667581-16-keescook@chromium.org>
Date: Fri, 23 Sep 2022 13:28:21 -0700
From: Kees Cook <keescook@...omium.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Kees Cook <keescook@...omium.org>, Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Roman Gushchin <roman.gushchin@...ux.dev>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>,
	Andrey Ryabinin <ryabinin.a.a@...il.com>,
	Alexander Potapenko <glider@...gle.com>,
	Andrey Konovalov <andreyknvl@...il.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Vincenzo Frascino <vincenzo.frascino@....com>,
	linux-mm@...ck.org, kasan-dev@...glegroups.com,
	"Ruhl, Michael J" <michael.j.ruhl@...el.com>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Nick Desaulniers <ndesaulniers@...gle.com>, Alex Elder <elder@...nel.org>,
	Josef Bacik <josef@...icpanda.com>, David Sterba <dsterba@...e.com>,
	Sumit Semwal <sumit.semwal@...aro.org>,
	Christian König <christian.koenig@....com>,
	Jesse Brandeburg <jesse.brandeburg@...el.com>,
	Daniel Micay <danielmicay@...il.com>, Yonghong Song <yhs@...com>,
	Marco Elver <elver@...gle.com>, Miguel Ojeda <ojeda@...nel.org>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	linux-btrfs@...r.kernel.org, linux-media@...r.kernel.org,
	dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org,
	linux-fsdevel@...r.kernel.org, intel-wired-lan@...ts.osuosl.org,
	dev@...nvswitch.org, x86@...nel.org, llvm@...ts.linux.dev,
	linux-hardening@...r.kernel.org
Subject: [PATCH v2 15/16] mm: Make ksize() a reporting-only function

With all "silently resizing" callers of ksize() refactored, remove the
logic in ksize() that would allow it to be used to effectively change
the size of an allocation (bypassing __alloc_size hints, etc). Users
wanting this feature need to either use kmalloc_size_roundup() before
an allocation, or call krealloc() directly.

For kfree_sensitive(), move the unpoisoning logic inline. Replace the
open-coded ksize() in __do_krealloc with ksize() now that it doesn't
perform unpoisoning.

Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc: Alexander Potapenko <glider@...gle.com>
Cc: Andrey Konovalov <andreyknvl@...il.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Vincenzo Frascino <vincenzo.frascino@....com>
Cc: linux-mm@...ck.org
Cc: kasan-dev@...glegroups.com
Signed-off-by: Kees Cook <keescook@...omium.org>
---
 mm/slab_common.c | 38 ++++++++++++++------------------------
 1 file changed, 14 insertions(+), 24 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index d7420cf649f8..60b77bcdc2e3 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1160,13 +1160,8 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 	void *ret;
 	size_t ks;
 
-	/* Don't use instrumented ksize to allow precise KASAN poisoning. */
-	if (likely(!ZERO_OR_NULL_PTR(p))) {
-		if (!kasan_check_byte(p))
-			return NULL;
-		ks = kfence_ksize(p) ?: __ksize(p);
-	} else
-		ks = 0;
+	/* How large is the allocation actually? */
+	ks = ksize(p);
 
 	/* If the object still fits, repoison it precisely. */
 	if (ks >= new_size) {
@@ -1232,8 +1227,10 @@ void kfree_sensitive(const void *p)
 	void *mem = (void *)p;
 
 	ks = ksize(mem);
-	if (ks)
+	if (ks) {
+		kasan_unpoison_range(mem, ks);
 		memzero_explicit(mem, ks);
+	}
 	kfree(mem);
 }
 EXPORT_SYMBOL(kfree_sensitive);
@@ -1242,10 +1239,11 @@ EXPORT_SYMBOL(kfree_sensitive);
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
  *
- * kmalloc may internally round up allocations and return more memory
+ * kmalloc() may internally round up allocations and return more memory
  * than requested. ksize() can be used to determine the actual amount of
- * memory allocated. The caller may use this additional memory, even though
- * a smaller amount of memory was initially specified with the kmalloc call.
+ * allocated memory. The caller may NOT use this additional memory, unless
+ * it calls krealloc(). To avoid an alloc/realloc cycle, callers can use
+ * kmalloc_size_roundup() to find the size of the associated kmalloc bucket.
  * The caller must guarantee that objp points to a valid object previously
  * allocated with either kmalloc() or kmem_cache_alloc(). The object
  * must not be freed during the duration of the call.
@@ -1254,13 +1252,11 @@ EXPORT_SYMBOL(kfree_sensitive);
  */
 size_t ksize(const void *objp)
 {
-	size_t size;
-
 	/*
-	 * We need to first check that the pointer to the object is valid, and
-	 * only then unpoison the memory. The report printed from ksize() is
-	 * more useful, then when it's printed later when the behaviour could
-	 * be undefined due to a potential use-after-free or double-free.
+	 * We need to first check that the pointer to the object is valid.
+	 * The KASAN report printed from ksize() is more useful, then when
+	 * it's printed later when the behaviour could be undefined due to
+	 * a potential use-after-free or double-free.
 	 *
 	 * We use kasan_check_byte(), which is supported for the hardware
 	 * tag-based KASAN mode, unlike kasan_check_read/write().
@@ -1274,13 +1270,7 @@ size_t ksize(const void *objp)
 	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !kasan_check_byte(objp))
 		return 0;
 
-	size = kfence_ksize(objp) ?: __ksize(objp);
-	/*
-	 * We assume that ksize callers could use whole allocated area,
-	 * so we need to unpoison this area.
-	 */
-	kasan_unpoison_range(objp, size);
-	return size;
+	return kfence_ksize(objp) ?: __ksize(objp);
 }
 EXPORT_SYMBOL(ksize);
-- 
2.34.1
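
To illustrate the usage pattern the commit message points callers at, here is a
minimal sketch of a hypothetical caller (not part of this series; the pkt_buf
structure and helper are invented for illustration) that asks for the rounded-up
kmalloc bucket size up front with kmalloc_size_roundup() instead of discovering
slack after the fact via ksize(). It assumes kernel context with linux/slab.h
and linux/types.h available.

/*
 * Hypothetical example: a caller that previously grew into ksize() slack
 * now sizes its request to the backing kmalloc bucket before allocating.
 */
struct pkt_buf {
	size_t capacity;	/* usable payload bytes */
	u8 data[];
};

static struct pkt_buf *pkt_buf_alloc(size_t payload, gfp_t gfp)
{
	/* Round the request up to the kmalloc bucket that will back it. */
	size_t bytes = kmalloc_size_roundup(sizeof(struct pkt_buf) + payload);
	struct pkt_buf *buf = kmalloc(bytes, gfp);

	if (!buf)
		return NULL;

	/* The whole bucket was requested, so all of it is usable. */
	buf->capacity = bytes - sizeof(struct pkt_buf);
	return buf;
}

Because the full capacity is passed to kmalloc() itself, __alloc_size-based
instrumentation sees the real usable size at allocation time rather than being
bypassed by a later ksize()-based resize.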