Date:   Thu, 29 Jul 2021 14:52:08 +0800
From:   Kefeng Wang <wangkefeng.wang@...wei.com>
To:     Shakeel Butt <shakeelb@...gle.com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Vlastimil Babka <vbabka@...e.cz>
CC:     Michal Hocko <mhocko@...e.com>, Roman Gushchin <guro@...com>,
        Wang Hai <wanghai38@...wei.com>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free


On 2021/7/28 23:53, Shakeel Butt wrote:
> SLUB uses the page allocator for higher-order allocations and updates
> the unreclaimable slab stat for such allocations. At the moment, the
> bulk free path for SLUB does not share code with the normal free path
> for these allocations and misses the stat update. So, fix the stat
> update by factoring out common code. The user-visible impact of the
> bug is a potentially inconsistent unreclaimable slab stat visible
> through meminfo and vmstat.
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> ---
>   mm/slub.c | 22 ++++++++++++----------
>   1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 6dad2b6fda6f..03770291aa6b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3238,6 +3238,16 @@ struct detached_freelist {
>   	struct kmem_cache *s;
>   };
>   
> +static inline void free_nonslab_page(struct page *page)
> +{
> +	unsigned int order = compound_order(page);
> +
> +	VM_BUG_ON_PAGE(!PageCompound(page), page);

Could we add a WARN_ON here instead? Otherwise we get no report at all
when CONFIG_DEBUG_VM is disabled.
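
Untested sketch of what I have in mind: keep the helper from your patch
as-is, but swap the VM_BUG_ON_PAGE for a WARN_ON so the check still
reports on !CONFIG_DEBUG_VM builds and the page is still freed
afterwards:

        static inline void free_nonslab_page(struct page *page)
        {
                unsigned int order = compound_order(page);

                /* Report on all builds, not only CONFIG_DEBUG_VM ones. */
                WARN_ON(!PageCompound(page));

                kfree_hook(page_address(page));
                mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
                                      -(PAGE_SIZE << order));
                __free_pages(page, order);
        }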


> +	kfree_hook(page_address(page));
> +	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
> +	__free_pages(page, order);
> +}
> +
>   /*
>    * This function progressively scans the array with free objects (with
>    * a limited look ahead) and extract objects belonging to the same
> @@ -3274,9 +3284,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>   	if (!s) {
>   		/* Handle kalloc'ed objects */
>   		if (unlikely(!PageSlab(page))) {
> -			BUG_ON(!PageCompound(page));
> -			kfree_hook(object);
> -			__free_pages(page, compound_order(page));
> +			free_nonslab_page(page);
>   			p[size] = NULL; /* mark object processed */
>   			return size;
>   		}
> @@ -4252,13 +4260,7 @@ void kfree(const void *x)
>   
>   	page = virt_to_head_page(x);
>   	if (unlikely(!PageSlab(page))) {
> -		unsigned int order = compound_order(page);
> -
> -		BUG_ON(!PageCompound(page));
> -		kfree_hook(object);
> -		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> -				      -(PAGE_SIZE << order));
> -		__free_pages(page, order);
> +		free_nonslab_page(page);
>   		return;
>   	}
>   	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
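
As an aside, a minimal userspace sketch for watching the counter the
commit message mentions (my illustration only, not part of the patch;
it assumes the usual nr_slab_unreclaimable and SUnreclaim field names
in /proc):

        #include <stdio.h>
        #include <string.h>

        /*
         * Scan @path for a line starting with @key and return the first
         * integer after it, or -1 on any failure.
         */
        static long scan_field(const char *path, const char *key)
        {
                char line[256];
                long val = -1;
                FILE *f = fopen(path, "r");

                if (!f)
                        return -1;
                while (fgets(line, sizeof(line), f)) {
                        if (!strncmp(line, key, strlen(key))) {
                                sscanf(line + strlen(key), "%ld", &val);
                                break;
                        }
                }
                fclose(f);
                return val;
        }

        int main(void)
        {
                /* nr_slab_unreclaimable is in pages, SUnreclaim in kB. */
                long pages = scan_field("/proc/vmstat", "nr_slab_unreclaimable");
                long kb = scan_field("/proc/meminfo", "SUnreclaim:");

                /*
                 * Sample this in a loop while a workload does large
                 * kmalloc() plus kfree_bulk(); a counter that only climbs
                 * and never drops back is the symptom of the missed
                 * decrement.
                 */
                printf("nr_slab_unreclaimable: %ld pages, SUnreclaim: %ld kB\n",
                       pages, kb);
                return 0;
        }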
