Message-ID: <310077ed-6f3f-41fe-afcf-36500a9408ec@lucifer.local>
Date: Tue, 23 May 2023 08:42:21 +0100
From: Lorenzo Stoakes <lstoakes@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Kees Cook <keescook@...omium.org>, linux-mm@...ck.org,
linux-hardening@...r.kernel.org, patches@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slab: remove HAVE_HARDENED_USERCOPY_ALLOCATOR
On Tue, May 23, 2023 at 09:31:36AM +0200, Vlastimil Babka wrote:
> With SLOB removed, both remaining allocators support hardened usercopy,
> so remove the config and associated #ifdef.
>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/Kconfig | 2 --
> mm/slab.h | 9 ---------
> security/Kconfig | 8 --------
> 3 files changed, 19 deletions(-)
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7672a22647b4..041f0da42f2b 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -221,7 +221,6 @@ choice
> config SLAB
> bool "SLAB"
> depends on !PREEMPT_RT
> - select HAVE_HARDENED_USERCOPY_ALLOCATOR
> help
> The regular slab allocator that is established and known to work
> well in all environments. It organizes cache hot objects in
> @@ -229,7 +228,6 @@ config SLAB
>
> config SLUB
> bool "SLUB (Unqueued Allocator)"
> - select HAVE_HARDENED_USERCOPY_ALLOCATOR
> help
> SLUB is a slab allocator that minimizes cache line usage
> instead of managing queues of cached objects (SLAB approach).
> diff --git a/mm/slab.h b/mm/slab.h
> index f01ac256a8f5..695ef96b4b5b 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -832,17 +832,8 @@ struct kmem_obj_info {
> void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
> #endif
>
> -#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
> void __check_heap_object(const void *ptr, unsigned long n,
> const struct slab *slab, bool to_user);
> -#else
> -static inline
> -void __check_heap_object(const void *ptr, unsigned long n,
> - const struct slab *slab, bool to_user)
> -{
> -}
> -#endif
Hm, __check_heap_object() is still defined in slab.c/slub.c and invoked in
usercopy.c, so don't we still want the prototype guarded? Perhaps replace the
#ifdef with CONFIG_HARDENED_USERCOPY instead? I may be missing something here :)
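
i.e. something along these lines (untested sketch, just to illustrate what I
mean - the same stub as before, only keyed off CONFIG_HARDENED_USERCOPY):

#ifdef CONFIG_HARDENED_USERCOPY
void __check_heap_object(const void *ptr, unsigned long n,
			 const struct slab *slab, bool to_user);
#else
/* stub for !CONFIG_HARDENED_USERCOPY builds */
static inline
void __check_heap_object(const void *ptr, unsigned long n,
			 const struct slab *slab, bool to_user)
{
}
#endif
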
> -
> #ifdef CONFIG_SLUB_DEBUG
> void skip_orig_size_check(struct kmem_cache *s, const void *object);
> #endif
> diff --git a/security/Kconfig b/security/Kconfig
> index 97abeb9b9a19..52c9af08ad35 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -127,16 +127,8 @@ config LSM_MMAP_MIN_ADDR
> this low address space will need the permission specific to the
> systems running LSM.
>
> -config HAVE_HARDENED_USERCOPY_ALLOCATOR
> - bool
> - help
> - The heap allocator implements __check_heap_object() for
> - validating memory ranges against heap object sizes in
> - support of CONFIG_HARDENED_USERCOPY.
> -
> config HARDENED_USERCOPY
> bool "Harden memory copies between kernel and userspace"
> - depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
> imply STRICT_DEVMEM
> help
> This option checks for obviously wrong memory regions when
> --
> 2.40.1
>