Message-ID: <ee56b1c9-c783-4adf-8ea1-6601cbfb9535@oracle.com>
Date: Tue, 23 Apr 2024 10:37:43 -0700
From: Jianfeng Wang <jianfeng.w.wang@...cle.com>
To: Bert Karwatzki <spasswolf@....de>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [External] : [PATCH] mm: slub: Fix compilation without
 CONFIG_SLUB_DEBUG



On 4/23/24 5:40 AM, Bert Karwatzki wrote:
> Since the introduction of count_partial_free_approx(), compilation of
> Linux fails with an implicit declaration of function ‘node_nr_objs’,
> because count_partial_free_approx() is compiled when SLAB_SUPPORTS_SYSFS
> is defined, even without CONFIG_SLUB_DEBUG. As count_partial_free_approx()
> is only used when CONFIG_SLUB_DEBUG is defined, it should only be
> compiled in that case.
> 

Hi Bert,

Thanks for noticing this.
The original patch has been updated to fix this and sent to the MM mailing list;
it was applied a few hours ago.

Link: https://lore.kernel.org/linux-mm/20240423045554.15045-1-jianfeng.w.wang@oracle.com/T/#m6ec634d0d214bea8807deac8cb15bf27dd47743d
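
For context, here is a condensed sketch of the guard layout your diff below
ends up with (function bodies elided; the version applied on the MM tree may
be arranged slightly differently):

#ifdef CONFIG_SLUB_DEBUG
#define MAX_PARTIAL_TO_SCAN 10000

static int count_free(struct slab *slab)
{
	return slab->objects - slab->inuse;
}

/*
 * Introduced by 1c5610f451be: roughly, scans at most MAX_PARTIAL_TO_SCAN
 * slabs on n->partial under n->list_lock and approximates the remainder.
 */
static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
{
	...
}
#endif /* CONFIG_SLUB_DEBUG */

#if defined(CONFIG_SLUB_DEBUG) || defined(SLAB_SUPPORTS_SYSFS)
static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct slab *))
{
	...
}
#endif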

> Fixes: 1c5610f451be ("slub: introduce count_partial_free_approx()")
> Signed-off-by: Bert Karwatzki <spasswolf@....de>
> ---
>  mm/slub.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index a3b6f05be2b9..a547ed041bc7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3226,6 +3226,7 @@ static inline int node_match(struct slab *slab, int node)
>  }
> 
>  #ifdef CONFIG_SLUB_DEBUG
> +#define MAX_PARTIAL_TO_SCAN 10000
>  static int count_free(struct slab *slab)
>  {
>  	return slab->objects - slab->inuse;
> @@ -3293,10 +3294,6 @@ static inline bool free_debug_processing(struct kmem_cache *s,
> 
>  	return checks_ok;
>  }
> -#endif /* CONFIG_SLUB_DEBUG */
> -
> -#if defined(CONFIG_SLUB_DEBUG) || defined(SLAB_SUPPORTS_SYSFS)
> -#define MAX_PARTIAL_TO_SCAN 10000
> 
>  static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
>  {
> @@ -3332,7 +3329,9 @@ static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
>  	spin_unlock_irqrestore(&n->list_lock, flags);
>  	return x;
>  }
> +#endif /* CONFIG_SLUB_DEBUG */
> 
> +#if defined(CONFIG_SLUB_DEBUG) || defined(SLAB_SUPPORTS_SYSFS)
>  static unsigned long count_partial(struct kmem_cache_node *n,
>  					int (*get_count)(struct slab *))
>  {
> --
> 2.43.0
> 
