Message-ID: <CAJaqyWfYHqf2=8BMo5ReKEB137fxGZR4XEJ2d4imXOOXAX2wHQ@mail.gmail.com>
Date: Tue, 13 Feb 2024 17:10:54 +0100
From: Eugenio Perez Martin <eperezma@...hat.com>
To: Steve Sistare <steven.sistare@...cle.com>
Cc: virtualization@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
"Michael S. Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>, Si-Wei Liu <si-wei.liu@...cle.com>,
Xie Yongji <xieyongji@...edance.com>, Stefano Garzarella <sgarzare@...hat.com>
Subject: Re: [PATCH V2 3/3] vdpa_sim: flush workers on suspend
On Mon, Feb 12, 2024 at 6:16 PM Steve Sistare <steven.sistare@...cle.com> wrote:
>
> Flush to guarantee no workers are running when suspend returns.
>
> Signed-off-by: Steve Sistare <steven.sistare@...cle.com>
> ---
> drivers/vdpa/vdpa_sim/vdpa_sim.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
> index be2925d0d283..a662b90357c3 100644
> --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
> +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
> @@ -74,6 +74,17 @@ static void vdpasim_worker_change_mm_sync(struct vdpasim *vdpasim,
> kthread_flush_work(work);
> }
>
> +static void flush_work_fn(struct kthread_work *work) {}
> +
> +static void vdpasim_flush_work(struct vdpasim *vdpasim)
> +{
> + struct kthread_work work;
> +
> + kthread_init_work(&work, flush_work_fn);
If the work is already queued, doesn't it break the linked list
because of the memset in kthread_init_work?
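For reference, this is roughly what the macro does (quoting
include/linux/kthread.h from memory, please double-check the exact
definition):

	/* The memset wipes work->node even if the work is still
	 * linked into a worker's work_list.
	 */
	#define kthread_init_work(work, fn)				\
		do {							\
			memset((work), 0, sizeof(struct kthread_work));	\
			INIT_LIST_HEAD(&(work)->node);			\
			(work)->func = (fn);				\
		} while (0)

If the work were still queued, the memset plus INIT_LIST_HEAD would
leave the worker's list pointing at reinitialized memory.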
> + kthread_queue_work(vdpasim->worker, &work);
> + kthread_flush_work(&work);
> +}
> +
> static struct vdpasim *vdpa_to_sim(struct vdpa_device *vdpa)
> {
> return container_of(vdpa, struct vdpasim, vdpa);
> @@ -511,6 +522,8 @@ static int vdpasim_suspend(struct vdpa_device *vdpa)
> vdpasim->running = false;
> mutex_unlock(&vdpasim->mutex);
>
> + vdpasim_flush_work(vdpasim);
Do we need to protect against the case where vdpasim_kick_vq and
vdpasim_suspend are called "at the same time"? Correct userland should
not be doing that, but a buggy or malicious one could be. Just calling
vdpasim_flush_work with the mutex acquired would solve the issue,
wouldn't it?
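Something along these lines (untested sketch; it assumes the queued
work functions do not take vdpasim->mutex themselves, otherwise the
flush would deadlock):

	static int vdpasim_suspend(struct vdpa_device *vdpa)
	{
		struct vdpasim *vdpasim = vdpa_to_sim(vdpa);

		mutex_lock(&vdpasim->mutex);
		vdpasim->running = false;

		/* Flush while still holding the mutex so a racing
		 * vdpasim_kick_vq cannot queue new work between
		 * clearing ->running and the flush.
		 */
		vdpasim_flush_work(vdpasim);

		mutex_unlock(&vdpasim->mutex);

		return 0;
	}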
Thanks!
> +
> return 0;
> }
>
> --
> 2.39.3
>