Message-ID: <2a18634f-b100-334e-f7b5-01c84302e27e@redhat.com>
Date: Thu, 6 May 2021 13:35:01 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>, kvm@...r.kernel.org
Cc: Sean Christopherson <seanjc@...gle.com>, x86@...nel.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: KVM: x86: Cancel pvclock_gtod_work on module removal
On 05/05/21 23:48, Thomas Gleixner wrote:
> Nothing prevents the following:
>
>   pvclock_gtod_notify()
>     queue_work(system_long_wq, &pvclock_gtod_work);
>   ...
>   remove_module(kvm);
>   ...
>   work_queue_run()
>     pvclock_gtod_work()  <- UAF
>
> Ditto for any other operation on that workqueue list head which touches
> pvclock_gtod_work after module removal.
>
> Cancel the work in kvm_arch_exit() to prevent that.
>
> Fixes: 16e8d74d2da9 ("KVM: x86: notifier for clocksource changes")
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> ---
> Found by inspection because of:
> https://lkml.kernel.org/r/0000000000001d43ac05c0f5c6a0@google.com
> See also:
> https://lkml.kernel.org/r/20210505105940.190490250@infradead.org
>
> TL;DR: Scheduling work with tk_core.seq write held is a bad idea.
> ---
> arch/x86/kvm/x86.c | 1 +
> 1 file changed, 1 insertion(+)
>
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8168,6 +8168,7 @@ void kvm_arch_exit(void)
>  	cpuhp_remove_state_nocalls(CPUHP_AP_X86_KVM_CLK_ONLINE);
>  #ifdef CONFIG_X86_64
>  	pvclock_gtod_unregister_notifier(&pvclock_gtod_notifier);
> +	cancel_work_sync(&pvclock_gtod_work);
>  #endif
>  	kvm_x86_ops.hardware_enable = NULL;
>  	kvm_mmu_module_exit();
>
Queued, thanks (with added Cc to stable).
Paolo