Open Source and information security mailing list archives
 
Message-ID: <1658686a-1f40-7e7e-e1b0-70af6908a9d0@redhat.com>
Date:   Thu, 11 Jan 2018 16:53:32 +0100
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Arnd Bergmann <arnd@...db.de>,
        Radim Krčmář <rkrcmar@...hat.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        David Hildenbrand <david@...hat.com>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Junaid Shahid <junaids@...gle.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: kvm: propagate register_shrinker return code

On 10/01/2018 17:26, Arnd Bergmann wrote:
> Patch "mm,vmscan: mark register_shrinker() as __must_check" is
> queued for 4.16 in linux-mm and adds a warning about the unchecked
> call to register_shrinker:
> 
> arch/x86/kvm/mmu.c:5485:2: warning: ignoring return value of 'register_shrinker', declared with attribute warn_unused_result [-Wunused-result]
> 
> This changes kvm_mmu_module_init() to fail and propagate the error
> code when the call to register_shrinker() fails.
> 
> Signed-off-by: Arnd Bergmann <arnd@...db.de>
> ---
>  arch/x86/kvm/mmu.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 89da688784fa..765c8e9df5d9 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -5465,30 +5465,34 @@ static void mmu_destroy_caches(void)
>  
>  int kvm_mmu_module_init(void)
>  {
> +	int ret = -ENOMEM;
> +
>  	kvm_mmu_clear_all_pte_masks();
>  
>  	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
>  					    sizeof(struct pte_list_desc),
>  					    0, SLAB_ACCOUNT, NULL);
>  	if (!pte_list_desc_cache)
> -		goto nomem;
> +		goto out;
>  
>  	mmu_page_header_cache = kmem_cache_create("kvm_mmu_page_header",
>  						  sizeof(struct kvm_mmu_page),
>  						  0, SLAB_ACCOUNT, NULL);
>  	if (!mmu_page_header_cache)
> -		goto nomem;
> +		goto out;
>  
>  	if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> -		goto nomem;
> +		goto out;
>  
> -	register_shrinker(&mmu_shrinker);
> +	ret = register_shrinker(&mmu_shrinker);
> +	if (ret)
> +		goto out;
>  
>  	return 0;
>  
> -nomem:
> +out:
>  	mmu_destroy_caches();
> -	return -ENOMEM;
> +	return ret;
>  }
>  
>  /*
> 

Queued, thanks.

Paolo
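
The error-handling pattern in the patch above (one shared cleanup label, with `ret` pre-seeded to -ENOMEM for the allocation failures and overwritten by register_shrinker()'s own return code) can be sketched in plain C. The `_sim` functions below are hypothetical stand-ins for the kernel APIs, not the real kvm/mm interfaces:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Knobs the caller can flip to simulate each failure mode. */
static int cache_ok = 1;      /* 0 => allocation fails (returns NULL)   */
static int shrinker_err = 0;  /* nonzero => register step fails with it */
static int destroyed = 0;     /* set when the cleanup path runs         */

static void *kmem_cache_create_sim(void) { return cache_ok ? (void *)1 : NULL; }
static int register_shrinker_sim(void)   { return shrinker_err; }
static void destroy_caches_sim(void)     { destroyed = 1; }

/*
 * Mirrors the structure of the patched kvm_mmu_module_init():
 * allocation failures fall through to the label with the pre-seeded
 * -ENOMEM, while register_shrinker_sim()'s code replaces it, so one
 * "out:" label serves both kinds of error.
 */
static int module_init_sim(void)
{
	int ret = -ENOMEM;

	if (!kmem_cache_create_sim())
		goto out;

	ret = register_shrinker_sim();
	if (ret)
		goto out;

	return 0;

out:
	destroy_caches_sim();
	return ret;
}
```

On success nothing is torn down; on either failure path the same cleanup runs and the caller sees the most specific error code available.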
