Message-ID: <3b0351d3-4753-1d69-a115-60b20c69656c@suse.cz>
Date: Tue, 5 Oct 2021 11:57:42 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Miaohe Lin <linmiaohe@...wei.com>, akpm@...ux-foundation.org,
cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com
Cc: gregkh@...uxfoundation.org, faiyazm@...eaurora.org,
andreyknvl@...il.com, ryabinin.a.a@...il.com, thgarnie@...gle.com,
keescook@...omium.org, bharata@...ux.ibm.com, guro@...com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist
depth and cnt
On 9/16/21 14:39, Miaohe Lin wrote:
> If an object's reuse is delayed, it will be excluded from the reconstructed
> freelist. But we forgot to adjust the cnt accordingly, so there will be
> a mismatch between the reconstructed freelist depth and cnt. This will lead
> to free_debug_processing() complaining about the freelist count, or to an
> incorrect SLUB inuse count.
>
> Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
I was worried that taking a pointer to the cnt parameter, when it's hardcoded
to 1, would destroy inlining. Luckily it looks like it does not; the function
is just renamed:
> ./scripts/bloat-o-meter mm/slub.o slub.o.after
add/remove: 1/1 grow/shrink: 0/0 up/down: 292/-292 (0)
Function old new delta
slab_free_freelist_hook.constprop - 292 +292
slab_free_freelist_hook 292 - -292
> ---
> mm/slub.c | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ed160b6c54f8..a56a6423d4e8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
> }
>
> static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> - void **head, void **tail)
> + void **head, void **tail,
> + int *cnt)
> {
>
> void *object;
> @@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> *head = object;
> if (!*tail)
> *tail = object;
> + } else {
> + /*
> + * Adjust the reconstructed freelist depth
> + * accordingly if object's reuse is delayed.
> + */
> + --(*cnt);
> }
> } while (object != old_tail);
>
> @@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
> * With KASAN enabled slab_free_freelist_hook modifies the freelist
> * to remove objects, whose reuse must be delayed.
> */
> - if (slab_free_freelist_hook(s, &head, &tail))
> + if (slab_free_freelist_hook(s, &head, &tail, &cnt))
> do_slab_free(s, page, head, tail, cnt, addr);
> }
>
>