Message-ID: <7f25e53a-6d18-6ffd-7e0e-2cce5e632ffc@wanadoo.fr>
Date:   Fri, 25 Mar 2022 17:46:37 +0100
From:   Christophe JAILLET <christophe.jaillet@...adoo.fr>
To:     Pavel Skripkin <paskripkin@...il.com>, pbonzini@...hat.com,
        seanjc@...gle.com, vkuznets@...hat.com, wanpengli@...cent.com,
        jmattson@...gle.com
Cc:     x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        syzbot+717ed82268812a643b28@...kaller.appspotmail.com
Subject: Re: [RFC PATCH] KVM: x86/mmu: fix general protection fault in
 kvm_mmu_uninit_tdp_mmu

On 25/03/2022 at 17:38, Pavel Skripkin wrote:
> Syzbot reported a general protection fault (GPF) in kvm_mmu_uninit_tdp_mmu(),
> caused by passing a NULL pointer to flush_workqueue().
> 
> tdp_mmu_zap_wq is allocated via alloc_workqueue(), which may fail. There
> is no error handling and the kvm_mmu_init_tdp_mmu() return value is simply
> ignored. All kvm_*_init_vm() functions are void anyway, so the easiest
> solution is to check that tdp_mmu_zap_wq is a valid pointer before passing
> it anywhere.
> 
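For illustration only, handling the failure at allocation time would look
roughly like the sketch below, but it would mean converting
kvm_mmu_init_tdp_mmu() and its callers to return an error; the signatures
implied here are hypothetical, not the current KVM code:

	/* Hypothetical: kvm_mmu_init_tdp_mmu() converted to return int. */
	kvm->arch.tdp_mmu_zap_wq =
		alloc_workqueue("kvm", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!kvm->arch.tdp_mmu_zap_wq)
		return -ENOMEM;	/* would also have to be propagated by kvm_mmu_init_vm() */

	return 0;
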
> Fixes: 22b94c4b63eb ("KVM: x86/mmu: Zap invalidated roots via asynchronous worker")
> Reported-and-tested-by: syzbot+717ed82268812a643b28@...kaller.appspotmail.com
> Signed-off-by: Pavel Skripkin <paskripkin@...il.com>
> ---
>   arch/x86/kvm/mmu/tdp_mmu.c | 14 +++++++++-----
>   1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index e7e7876251b3..b3e8ff7ac5b0 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -48,8 +48,10 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
>   	if (!kvm->arch.tdp_mmu_enabled)
>   		return;
>   
> -	flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
> -	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
> +	if (kvm->arch.tdp_mmu_zap_wq) {
> +		flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
> +		destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);

Hi,
unrelated to the patch, but the flush_workqueue() call is redundant and
could be removed: destroy_workqueue() already drains the queue.
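
For reference, a minimal sketch of how the hunk could look with the redundant
flush dropped (keeping the NULL check proposed in this patch):

	if (kvm->arch.tdp_mmu_zap_wq) {
		/*
		 * destroy_workqueue() drains any work still queued before
		 * freeing the workqueue, so no separate flush_workqueue()
		 * call is needed here.
		 */
		destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
	}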

Just my 2c,
CJ

> +	}
>   
>   	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
>   	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
> @@ -119,9 +121,11 @@ static void tdp_mmu_zap_root_work(struct work_struct *work)
>   
>   static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
>   {
> -	root->tdp_mmu_async_data = kvm;
> -	INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
> -	queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
> +	if (kvm->arch.tdp_mmu_zap_wq) {
> +		root->tdp_mmu_async_data = kvm;
> +		INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
> +		queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
> +	}
>   }
>   
>   static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page)
