Message-ID: <c4424700-3502-90d1-b73a-2d5b6f7256c8@suse.cz>
Date: Tue, 4 Aug 2020 19:18:50 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Kees Cook <keescook@...omium.org>, akpm@...ux-foundation.org
Cc: Roman Gushchin <guro@...com>, Christoph Lameter <cl@...ux.com>,
Alexander Popov <alex.popov@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>, vinmenon@...eaurora.org,
Matthew Garrett <mjg59@...gle.com>,
Jann Horn <jannh@...gle.com>,
Vijayanand Jitta <vjitta@...eaurora.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] slab: Add naive detection of double free
On 6/25/20 11:55 PM, Kees Cook wrote:
> Similar to commit ce6fa91b9363 ("mm/slub.c: add a naive detection
> of double free or corruption"), add a very cheap double-free check
> for SLAB under CONFIG_SLAB_FREELIST_HARDENED. With this added, the
> "SLAB_FREE_DOUBLE" LKDTM test passes under SLAB:
>
> lkdtm: Performing direct entry SLAB_FREE_DOUBLE
> lkdtm: Attempting double slab free ...
> ------------[ cut here ]------------
> WARNING: CPU: 2 PID: 2193 at mm/slab.c:757 ___cache_free+0x325/0x390
>
> Signed-off-by: Kees Cook <keescook@...omium.org>
No idea how much it helps in practice wrt security, but implementation-wise it
seems fine, so:
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Maybe you don't want to warn just once, though? We had a similar discussion
about cache_to_obj().
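[Editorial note: the check in the quoted patch below can be illustrated with a
small userspace sketch. The struct and helper names here mirror the patch only
loosely; this is an assumed, simplified model of the per-CPU array cache, not
kernel code, and it replaces WARN_ON_ONCE() with a plain return value.]

	#include <assert.h>
	#include <stdbool.h>

	/* Simplified stand-in for the kernel's struct array_cache. */
	struct array_cache {
		unsigned int avail;
		void *entry[16];
	};

	/*
	 * Sketch of the patch's __free_one() logic: refuse to push an
	 * object that is already at the top of the cache, which catches
	 * the trivial back-to-back double free cheaply (one compare).
	 */
	static bool free_one(struct array_cache *ac, void *objp)
	{
		if (ac->avail > 0 && ac->entry[ac->avail - 1] == objp)
			return false;	/* trivial double free detected */
		ac->entry[ac->avail++] = objp;
		return true;
	}

	int main(void)
	{
		struct array_cache ac = { 0 };
		int obj;

		assert(free_one(&ac, &obj));	/* first free is accepted */
		assert(!free_one(&ac, &obj));	/* immediate refree is caught */
		assert(ac.avail == 1);		/* cache state left intact */
		return 0;
	}

Note the check is deliberately naive: it only compares against the most
recently freed object, so an A-B-A free pattern slips through, but the cost is
a single comparison on the hot free path.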
> ---
> mm/slab.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index ebac5e400ad0..bbff6705ab2b 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -749,6 +749,16 @@ static void drain_alien_cache(struct kmem_cache *cachep,
> }
> }
>
> +/* &alien->lock must be held by alien callers. */
> +static __always_inline void __free_one(struct array_cache *ac, void *objp)
> +{
> + /* Avoid trivial double-free. */
> + if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> + WARN_ON_ONCE(ac->avail > 0 && ac->entry[ac->avail - 1] == objp))
> + return;
> + ac->entry[ac->avail++] = objp;
> +}
> +
> static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
> int node, int page_node)
> {
> @@ -767,7 +777,7 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
> STATS_INC_ACOVERFLOW(cachep);
> __drain_alien_cache(cachep, ac, page_node, &list);
> }
> - ac->entry[ac->avail++] = objp;
> + __free_one(ac, objp);
> spin_unlock(&alien->lock);
> slabs_destroy(cachep, &list);
> } else {
> @@ -3457,7 +3467,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
> }
> }
>
> - ac->entry[ac->avail++] = objp;
> + __free_one(ac, objp);
> }
>
> /**
>