Open Source and information security mailing list archives
 
Message-ID: <6594a42c-0f53-4124-9177-d1c631d9764f@suse.cz>
Date: Fri, 1 Mar 2024 17:03:58 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: chengming.zhou@...ux.dev, cl@...ux.com
Cc: penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
 akpm@...ux-foundation.org, roman.gushchin@...ux.dev, 42.hyeyoo@...il.com,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm, slab: remove the corner case of inc_slabs_node()

On 2/22/24 14:02, chengming.zhou@...ux.dev wrote:
> From: Chengming Zhou <chengming.zhou@...ux.dev>
> 
> We already have the inc_slabs_node() after kmem_cache_node->node[node]
> initialized in early_kmem_cache_node_alloc(), this special case of
> inc_slabs_node() can be removed. Then we don't need to consider the
> existence of kmem_cache_node in inc_slabs_node() anymore.
> 
> Signed-off-by: Chengming Zhou <chengming.zhou@...ux.dev>

Well spotted, thanks. Added to slab/for-next.

> ---
>  mm/slub.c | 13 ++-----------
>  1 file changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 284b751b3b64..3f413e5e1415 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1500,16 +1500,8 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
>  {
>  	struct kmem_cache_node *n = get_node(s, node);
>  
> -	/*
> -	 * May be called early in order to allocate a slab for the
> -	 * kmem_cache_node structure. Solve the chicken-egg
> -	 * dilemma by deferring the increment of the count during
> -	 * bootstrap (see early_kmem_cache_node_alloc).
> -	 */
> -	if (likely(n)) {
> -		atomic_long_inc(&n->nr_slabs);
> -		atomic_long_add(objects, &n->total_objects);
> -	}
> +	atomic_long_inc(&n->nr_slabs);
> +	atomic_long_add(objects, &n->total_objects);
>  }
>  static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
>  {
> @@ -4877,7 +4869,6 @@ static void early_kmem_cache_node_alloc(int node)
>  	slab = new_slab(kmem_cache_node, GFP_NOWAIT, node);
>  
>  	BUG_ON(!slab);
> -	inc_slabs_node(kmem_cache_node, slab_nid(slab), slab->objects);
>  	if (slab_nid(slab) != node) {
>  		pr_err("SLUB: Unable to allocate memory from node %d\n", node);
>  		pr_err("SLUB: Allocating a useless per node structure in order to be able to continue\n");

