Message-ID: <25db026b-76bc-cad3-7913-c310fc6cd822@suse.cz>
Date:   Tue, 5 Oct 2021 12:50:08 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Miaohe Lin <linmiaohe@...wei.com>, akpm@...ux-foundation.org,
        cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
        iamjoonsoo.kim@....com
Cc:     gregkh@...uxfoundation.org, faiyazm@...eaurora.org,
        andreyknvl@...il.com, ryabinin.a.a@...il.com, thgarnie@...gle.com,
        keescook@...omium.org, bharata@...ux.ibm.com, guro@...com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk
 free

On 9/16/21 14:39, Miaohe Lin wrote:
> kmem_cache_free_bulk() already calls memcg_slab_free_hook() for all
> objects when doing a bulk free, so do_slab_free() shouldn't call it
> again for the bulk-free case; otherwise the memcg slab count ends up
> incorrect.
> 
> Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>

Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

I now noticed the series doesn't Cc: stable and it should, so I hope Andrew
can add the stable tags together with the review tags. Thanks.

> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index f3df0f04a472..d8f77346376d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>  	struct kmem_cache_cpu *c;
>  	unsigned long tid;
>  
> -	memcg_slab_free_hook(s, &head, 1);
> +	/* memcg_slab_free_hook() is already called for bulk free. */
> +	if (!tail)
> +		memcg_slab_free_hook(s, &head, 1);
>  redo:
>  	/*
>  	 * Determine the currently cpus per cpu slab.
> 
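
For readers who want to see the double-uncharge outside the kernel tree, here
is a minimal standalone sketch of the accounting problem the patch fixes. It
is a toy model under stated assumptions, not the SLUB code: the names mirror
memcg_slab_free_hook(), do_slab_free() and kmem_cache_free_bulk(), but the
bodies are simplified stand-ins and the signatures are reduced.

	/*
	 * Toy model (assumption: simplified stand-ins, not mm/slub.c). A bulk
	 * free uncharges every object up front; the per-object free path must
	 * therefore skip the hook when handed a detached freelist
	 * (tail != NULL), or the memcg counter is decremented twice.
	 */
	#include <stdio.h>
	#include <stddef.h>

	static long memcg_charged;	/* stand-in for the memcg slab counter */

	static void memcg_slab_free_hook(void **p, size_t cnt)
	{
		(void)p;
		memcg_charged -= (long)cnt;	/* uncharge @cnt objects */
	}

	/* Mirrors do_slab_free(): only uncharge on the single-object path. */
	static void do_slab_free(void *head, void *tail, size_t cnt)
	{
		/* memcg_slab_free_hook() is already called for bulk free. */
		if (!tail)
			memcg_slab_free_hook(&head, 1);
		(void)cnt;	/* actual freeing elided */
	}

	/* Mirrors kmem_cache_free_bulk(): uncharges all objects, then frees. */
	static void kmem_cache_free_bulk(size_t nr, void **p)
	{
		memcg_slab_free_hook(p, nr);
		do_slab_free(p[0], p[nr - 1], nr); /* detached list: tail != NULL */
	}

	int main(void)
	{
		void *objs[4] = { (void *)1, (void *)2, (void *)3, (void *)4 };

		memcg_charged = 4;	/* four charged objects */
		kmem_cache_free_bulk(4, objs);

		/* Prints 0 with the "if (!tail)" check; -1 without it. */
		printf("remaining charge: %ld\n", memcg_charged);
		return 0;
	}

With the "if (!tail)" guard the counter balances out at 0; dropping the guard
reproduces the extra uncharge that the patch removes.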
