Message-ID: <aEARz1yOtGfudqNk@hyeyoo>
Date: Wed, 4 Jun 2025 18:28:47 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>, Vlastimil Babka <vbabka@...e.cz>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Rakie Kim <rakie.kim@...com>, Hyeonggon Yoo <42.hyeyoo@...il.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/3] mm,slub: Do not special case N_NORMAL nodes for
 slab_nodes

On Tue, Jun 03, 2025 at 01:08:48PM +0200, Oscar Salvador wrote:
> Currently, slab_mem_going_online_callback() checks whether the node has
> N_NORMAL memory in order to be set in slab_nodes.
> While it is true that gettind rid of that enforcing would mean

nit: gettind -> getting

> ending up with movable nodes in slab_nodes, the memory waste that comes
> with that is negligible.
> 
> So stop checking for status_change_nid_normal and just use status_change_nid
> instead which works for both types of memory.
> 
> Also, once we allocate the kmem_cache_node cache for the node in
> slab_mem_going_online_callback(), we never deallocate it in
> slab_mem_offline_callback() when the node goes memoryless, so we can just
> get rid of it.
> 
> The side effects are that we will stop clearing the node from slab_nodes,
> and also that newly created kmem caches after node hotremove will now allocate
> their kmem_cache_node for the node(s) that were hotremoved, but these
> should be negligible.
> 
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

Looks good to me,
Reviewed-by: Harry Yoo <harry.yoo@...cle.com>

-- 
Cheers,
Harry / Hyeonggon

> ---
>  mm/slub.c | 34 +++-------------------------------
>  1 file changed, 3 insertions(+), 31 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index be8b09e09d30..f92b43d36adc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -447,7 +447,7 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>  
>  /*
>   * Tracks for which NUMA nodes we have kmem_cache_nodes allocated.
> - * Corresponds to node_state[N_NORMAL_MEMORY], but can temporarily
> + * Corresponds to node_state[N_MEMORY], but can temporarily
>   * differ during memory hotplug/hotremove operations.
>   * Protected by slab_mutex.
>   */
> @@ -6160,36 +6160,12 @@ static int slab_mem_going_offline_callback(void *arg)
>  	return 0;
>  }
>  
> -static void slab_mem_offline_callback(void *arg)
> -{
> -	struct memory_notify *marg = arg;
> -	int offline_node;
> -
> -	offline_node = marg->status_change_nid_normal;
> -
> -	/*
> -	 * If the node still has available memory. we need kmem_cache_node
> -	 * for it yet.
> -	 */
> -	if (offline_node < 0)
> -		return;
> -
> -	mutex_lock(&slab_mutex);
> -	node_clear(offline_node, slab_nodes);
> -	/*
> -	 * We no longer free kmem_cache_node structures here, as it would be
> -	 * racy with all get_node() users, and infeasible to protect them with
> -	 * slab_mutex.
> -	 */
> -	mutex_unlock(&slab_mutex);
> -}
> -
>  static int slab_mem_going_online_callback(void *arg)
>  {
>  	struct kmem_cache_node *n;
>  	struct kmem_cache *s;
>  	struct memory_notify *marg = arg;
> -	int nid = marg->status_change_nid_normal;
> +	int nid = marg->status_change_nid;
>  	int ret = 0;
>  
>  	/*
> @@ -6247,10 +6223,6 @@ static int slab_memory_callback(struct notifier_block *self,
>  	case MEM_GOING_OFFLINE:
>  		ret = slab_mem_going_offline_callback(arg);
>  		break;
> -	case MEM_OFFLINE:
> -	case MEM_CANCEL_ONLINE:
> -		slab_mem_offline_callback(arg);
> -		break;
>  	case MEM_ONLINE:
>  	case MEM_CANCEL_OFFLINE:
>  		break;
> @@ -6321,7 +6293,7 @@ void __init kmem_cache_init(void)
>  	 * Initialize the nodemask for which we will allocate per node
>  	 * structures. Here we don't need taking slab_mutex yet.
>  	 */
> -	for_each_node_state(node, N_NORMAL_MEMORY)
> +	for_each_node_state(node, N_MEMORY)
>  		node_set(node, slab_nodes);
>  
>  	create_boot_cache(kmem_cache_node, "kmem_cache_node",
> -- 
> 2.49.0
> 
> 
