Message-ID: <Z5RkcB_wf5Y74BUM@kbusch-mbp>
Date: Fri, 24 Jan 2025 21:11:28 -0700
From: Keith Busch <kbusch@...nel.org>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: Ensure NX huge page recovery thread is
 alive before waking

On Fri, Jan 24, 2025 at 03:46:23PM -0800, Sean Christopherson wrote:
> When waking a VM's NX huge page recovery thread, ensure the thread is
> actually alive before trying to wake it. Now that the thread is spawned
> on-demand during KVM_RUN, a VM without a recovery thread is reachable via
> the related module params.

Oh, this is what I thought we could do. I should have read ahead. :)

> +static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
> +{
> +	/*
> +	 * The NX recovery thread is spawned on-demand at the first KVM_RUN and
> +	 * may not be valid even though the VM is globally visible. Do nothing,
> +	 * as such a VM can't have any possible NX huge pages.
> +	 */
> +	struct vhost_task *nx_thread = READ_ONCE(kvm->arch.nx_huge_page_recovery_thread);
> +
> +	if (nx_thread)
> +		vhost_task_wake(nx_thread);
> +}
> +
> static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp)
> {
> 	if (nx_hugepage_mitigation_hard_disabled)
> @@ -7180,7 +7193,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
> 			kvm_mmu_zap_all_fast(kvm);
> 			mutex_unlock(&kvm->slots_lock);
>
> -			vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
> +			kvm_wake_nx_recovery_thread(kvm);
> 		}
> 		mutex_unlock(&kvm_lock);
> 	}
> @@ -7315,7 +7328,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
> 		mutex_lock(&kvm_lock);
>
> 		list_for_each_entry(kvm, &vm_list, vm_list)
> -			vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
> +			kvm_wake_nx_recovery_thread(kvm);
>
> 		mutex_unlock(&kvm_lock);
> 	}
> @@ -7451,14 +7464,20 @@ static void kvm_mmu_start_lpage_recovery(struct once *once)
> {
> 	struct kvm_arch *ka = container_of(once, struct kvm_arch, nx_once);
> 	struct kvm *kvm = container_of(ka, struct kvm, arch);
> +	struct vhost_task *nx_thread;
>
> 	kvm->arch.nx_huge_page_last = get_jiffies_64();
> -	kvm->arch.nx_huge_page_recovery_thread = vhost_task_create(
> -		kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill,
> -		kvm, "kvm-nx-lpage-recovery");
> +	nx_thread = vhost_task_create(kvm_nx_huge_page_recovery_worker,
> +				      kvm_nx_huge_page_recovery_worker_kill,
> +				      kvm, "kvm-nx-lpage-recovery");
>
> -	if (kvm->arch.nx_huge_page_recovery_thread)
> -		vhost_task_start(kvm->arch.nx_huge_page_recovery_thread);
> +	if (!nx_thread)
> +		return;
> +
> +	vhost_task_start(nx_thread);
> +
> +	/* Make the task visible only once it is fully started. */
> +	WRITE_ONCE(kvm->arch.nx_huge_page_recovery_thread, nx_thread);

I believe the WRITE_ONCE needs to happen before the vhost_task_start() call
to ensure the parameter update callback can see the task before it's started.
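
Something like the below is what I have in mind (untested sketch; it assumes
waking a created-but-not-yet-started vhost task is safe):

	nx_thread = vhost_task_create(kvm_nx_huge_page_recovery_worker,
				      kvm_nx_huge_page_recovery_worker_kill,
				      kvm, "kvm-nx-lpage-recovery");
	if (!nx_thread)
		return;

	/* Publish the task before starting it so a concurrent module param
	 * update can observe it and wake it. */
	WRITE_ONCE(kvm->arch.nx_huge_page_recovery_thread, nx_thread);

	vhost_task_start(nx_thread);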