Message-ID: <20250826141516.f_jWThaV@linutronix.de>
Date: Tue, 26 Aug 2025 16:15:16 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Sean Christopherson <seanjc@...gle.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jason Wang <jasowang@...hat.com>, kvm@...r.kernel.org,
virtualization@...ts.linux.dev, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] vhost_task: KVM: Don't wake KVM x86's recovery
thread if vhost task was killed
On 2025-08-26 07:03:33 [-0700], Sean Christopherson wrote:
> And the call from __vhost_worker_flush() is done while holding a vhost_worker.mutex.
> That's probably ok? But there are many paths that lead to __vhost_worker_flush(),
> which makes it difficult to audit all flows. So even if there is an easy change
> for the RCU conflict, I wouldn't be comfortable adding a mutex_lock() to so many
> flows in a patch that needs to go to stable@.
If I may throw something else into the mix: if you do an "early"
get_task_struct() on the thread (from within the thread itself), then you
could wake it even after it has gone through do_exit(), since the
task_struct would remain valid. Once you have removed it from all structs
where it can be found, you would do the final put_task_struct().
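
Something along these lines, just as a minimal sketch of the idea; the
my_worker names are made up for illustration and are not meant to map onto
the actual vhost code:

	#include <linux/kthread.h>
	#include <linux/sched.h>
	#include <linux/sched/task.h>	/* get_task_struct(), put_task_struct() */

	struct my_worker {
		struct task_struct *task;	/* pinned by the extra reference */
	};

	static int my_worker_fn(void *data)
	{
		struct my_worker *w = data;

		/* "early" reference, taken within the thread itself */
		get_task_struct(current);
		WRITE_ONCE(w->task, current);

		while (!kthread_should_stop()) {
			/* do work, then sleep ... */
			schedule();
		}
		/* task_struct stays valid past do_exit() due to the reference */
		return 0;
	}

	/* Waker side: safe even if the thread has already exited. */
	static void my_worker_kick(struct my_worker *w)
	{
		struct task_struct *t = READ_ONCE(w->task);

		if (t)
			wake_up_process(t);
	}

	/* Teardown: unlink first, only then drop the final reference. */
	static void my_worker_destroy(struct my_worker *w)
	{
		struct task_struct *t = w->task;

		WRITE_ONCE(w->task, NULL);	/* no longer findable via the worker */
		if (t)
			put_task_struct(t);
	}

The waker only ever dereferences the pinned task_struct, so waking in the
window between do_exit() and the final put_task_struct() is harmless.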
Sebastian