Message-ID: <0cad1ff3-8339-a3eb-fc36-c8bda1392451@suse.cz>
Date:   Tue, 14 Mar 2023 15:41:26 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Chen Jun <chenjun102@...wei.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, cl@...ux.com, penberg@...nel.org,
        rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc:     xuqiang36@...wei.com, wangkefeng.wang@...wei.com
Subject: Re: [PATCH] mm/slub: Reduce memory consumption in extreme scenarios


On 3/14/23 13:34, Chen Jun wrote:
> When kmalloc_node() is called without __GFP_THISNODE and the target node
> lacks sufficient memory, SLUB allocates a folio from a node other than
> the requested one, instead of taking a partial slab from that node.
> 
> However, since the allocated folio does not belong to the requested
> node, it is deactivated and added to the partial slab list of the node
> it belongs to.
> 
> This behavior can result in excessive memory usage when the requested
> node has insufficient memory, as SLUB will repeatedly allocate folios
> from other nodes without reusing the previously allocated ones.
> 
> To prevent this memory wastage,
> when (node != NUMA_NO_NODE) && !(gfpflags & __GFP_THISNODE):
> 1) try to get a partial slab from the target node with __GFP_THISNODE.
> 2) if 1) failed, try to allocate a new slab from the target node with
>    __GFP_THISNODE.
> 3) if 2) failed, retry 1) and 2) without the __GFP_THISNODE constraint,
>    as sketched below.
> 
> When node == NUMA_NO_NODE || (gfpflags & __GFP_THISNODE), the behavior
> remains unchanged.
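> 
> With the change applied, the slowpath around new_objects effectively
> becomes the following (condensed sketch only; see the diff below for
> the actual code):
> 
> new_objects:
> 	pc.flags = gfpflags;
> 	/* steps 1) and 2): bias both the partial-list search and the
> 	 * new slab allocation towards the requested node first */
> 	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
> 		pc.flags |= __GFP_THISNODE;
> 
> 	freelist = get_partial(s, node, &pc);
> 	if (freelist)
> 		goto check_new_slab;
> 
> 	slab = new_slab(s, pc.flags, node);
> 	if (unlikely(!slab)) {
> 		/* step 3): one retry without the __GFP_THISNODE bias */
> 		if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode) {
> 			try_thisnode = false;
> 			goto new_objects;
> 		}
> 		slab_out_of_memory(s, gfpflags, node);
> 		return NULL;
> 	}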
> 
> On qemu with 4 NUMA nodes, each with 1G of memory, a test ko was written
> to call kmalloc_node(196, GFP_KERNEL, 3) (4 * 1024 + 4) * 1024 times.
> 
> cat /proc/slabinfo shows:
> kmalloc-256       4200530 13519712    256   32    2 : tunables..
> 
> After this patch,
> cat /proc/slabinfo shows:
> kmalloc-256       4200558 4200768    256   32    2 : tunables..
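> 
> (That is ~3.2 GiB of kmalloc-256 slabs (13519712 * 256 bytes) backing
> ~1.0 GiB of live objects before the patch, versus ~1.0 GiB of slabs
> after it.)
> 
> The test ko boils down to the following (a sketch with a made-up module
> name; the loop and parameters are as described above):
> 
> #include <linux/module.h>
> #include <linux/slab.h>
> 
> static int __init knode_test_init(void)
> {
> 	unsigned long i;
> 
> 	/* (4 * 1024 + 4) * 1024 allocations of 196 bytes (kmalloc-256),
> 	 * all requesting node 3; deliberately leaked so the slab usage
> 	 * stays visible in /proc/slabinfo */
> 	for (i = 0; i < (4 * 1024 + 4) * 1024; i++)
> 		kmalloc_node(196, GFP_KERNEL, 3);
> 
> 	return 0;
> }
> module_init(knode_test_init);
> MODULE_LICENSE("GPL");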
> 
> Signed-off-by: Chen Jun <chenjun102@...wei.com>
> ---
>  mm/slub.c | 22 +++++++++++++++++++---
>  1 file changed, 19 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 39327e98fce3..32e436957e03 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2384,7 +2384,7 @@ static void *get_partial(struct kmem_cache *s, int node, struct partial_context
>  		searchnode = numa_mem_id();
>  
>  	object = get_partial_node(s, get_node(s, searchnode), pc);
> -	if (object || node != NUMA_NO_NODE)
> +	if (object || (node != NUMA_NO_NODE && (pc->flags & __GFP_THISNODE)))
>  		return object;
>  
>  	return get_any_partial(s, pc);
> @@ -3069,6 +3069,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	struct slab *slab;
>  	unsigned long flags;
>  	struct partial_context pc;
> +	bool try_thisnode = true;
>  
>  	stat(s, ALLOC_SLOWPATH);
>  
> @@ -3181,8 +3182,18 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	}
>  
>  new_objects:
> -
>  	pc.flags = gfpflags;
> +
> +	/*
> +	 * when (node != NUMA_NO_NODE) && !(gfpflags & __GFP_THISNODE)
> +	 * 1) try to get a partial slab from target node with __GFP_THISNODE.
> +	 * 2) if 1) failed, try to allocate a new slab from target node with
> +	 *    __GFP_THISNODE.
> +	 * 3) if 2) failed, retry 1) and 2) without __GFP_THISNODE constraint.
> +	 */
> +	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
> +		pc.flags |= __GFP_THISNODE;

Hmm, I'm thinking we should perhaps also remove the direct reclaim
possibility from attempt 2). In your qemu test it should make no
difference, as the test fills everything with kernel memory that is not
reclaimable. But in practice the target node might be filled with user
memory, and I think it's better to quickly allocate on a different node
than to spend time in direct reclaim. So the following should work, I
think:

pc.flags = GFP_NOWAIT | __GFP_NOWARN | __GFP_THISNODE;
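
i.e. the new_objects block above would become something like this
(untested sketch; the fallback retry with try_thisnode == false still
uses the caller's original gfpflags, so nothing is lost - only the
opportunistic node-local attempt is barred from reclaiming or warning):

	pc.flags = gfpflags;

	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
		pc.flags = GFP_NOWAIT | __GFP_NOWARN | __GFP_THISNODE;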

> +
>  	pc.slab = &slab;
>  	pc.orig_size = orig_size;
>  	freelist = get_partial(s, node, &pc);
> @@ -3190,10 +3201,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  		goto check_new_slab;
>  
>  	slub_put_cpu_ptr(s->cpu_slab);
> -	slab = new_slab(s, gfpflags, node);
> +	slab = new_slab(s, pc.flags, node);
>  	c = slub_get_cpu_ptr(s->cpu_slab);
>  
>  	if (unlikely(!slab)) {
> +		if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode) {
> +			try_thisnode = false;
> +			goto new_objects;
> +		}
> +
>  		slab_out_of_memory(s, gfpflags, node);
>  		return NULL;
>  	}
