Message-ID: <4izeqsk6wgptwbk37qlbsp4fmxwgen6xyqqscrvgcejkoeh6nn@7g433deykx3x>
Date: Tue, 10 Jun 2025 15:33:09 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Manos Pitsidianakis <manos.pitsidianakis@...aro.org>
Cc: David Airlie <airlied@...hat.com>, Gerd Hoffmann <kraxel@...hat.com>,
Dmitry Osipenko <dmitry.osipenko@...labora.com>, Gurchetan Singh <gurchetansingh@...omium.org>,
Chia-I Wu <olvaffe@...il.com>, Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
Simona Vetter <simona@...ll.ch>, Alex Bennée <alex.bennee@...aro.org>,
Viresh Kumar <viresh.kumar@...aro.org>, dri-devel@...ts.freedesktop.org, virtualization@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] virtgpu: add virtio_gpu_fence_cleanup()

On Mon, May 05, 2025 at 11:59:15AM +0300, Manos Pitsidianakis wrote:
>When virtio_gpu_remove() is called, there might be in-flight command
>objects in the virtqueues that the VIRTIO device hasn't processed. These
>commands might use fences, which end up being leaked, as reported by
>/sys/kernel/debug/kmemleak.
>
>This commit adds a cleanup function that lowers the reference count of
>all in-flight fences, resulting in their de-allocation.
>
>Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@...aro.org>
>---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
> drivers/gpu/drm/virtio/virtgpu_fence.c | 12 ++++++++++++
> drivers/gpu/drm/virtio/virtgpu_kms.c | 1 +
> 3 files changed, 14 insertions(+)
>
>diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
>index b3d367be6f204dbc98bf1c6e5c43a37ac8c0d8b3..c94b5edb2aec42fe5cd6416e243cf40e4e2b060f 100644
>--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
>+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
>@@ -465,6 +465,7 @@ void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
> struct virtio_gpu_fence *fence);
> void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev,
> u64 fence_id);
>+void virtio_gpu_fence_cleanup(struct virtio_gpu_device *vdev);
>
> /* virtgpu_object.c */
> void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo);
>diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
>index 44c1d8ef3c4d07881e2c4c92cc67f6aba7a5df4f..3e536d190c0464f4db8955605bbf0aa4aa3612bd 100644
>--- a/drivers/gpu/drm/virtio/virtgpu_fence.c
>+++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
>@@ -157,3 +157,15 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
> }
> spin_unlock_irqrestore(&drv->lock, irq_flags);
> }
>+
>+void virtio_gpu_fence_cleanup(struct virtio_gpu_device *vgdev)
>+{
>+	struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
>+	struct virtio_gpu_fence *curr, *tmp;
>+
>+	list_for_each_entry_safe(curr, tmp, &drv->fences, node) {

I don't know this code well, but I see that elsewhere we hold `drv->lock` when accessing `drv->fences`; should we do the same here? (Or is it not needed since we are already in the cleanup phase?)
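
Just to illustrate what I mean, an untested sketch that mirrors the locking already done in virtio_gpu_fence_event_process() (the body is otherwise your code, only the lock/unlock is added):

void virtio_gpu_fence_cleanup(struct virtio_gpu_device *vgdev)
{
	struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
	struct virtio_gpu_fence *curr, *tmp;
	unsigned long irq_flags;

	/* take drv->lock as the other users of drv->fences do */
	spin_lock_irqsave(&drv->lock, irq_flags);
	list_for_each_entry_safe(curr, tmp, &drv->fences, node) {
		dma_fence_signal_locked(&curr->f);
		list_del(&curr->node);
		dma_fence_put(&curr->f);
	}
	spin_unlock_irqrestore(&drv->lock, irq_flags);
}

But maybe it really isn't needed at this point of the teardown, in which case a comment explaining why would be enough.
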
The rest LGTM!
Thanks,
Stefano

>+		dma_fence_signal_locked(&curr->f);
>+		list_del(&curr->node);
>+		dma_fence_put(&curr->f);
>+	}
>+}
>diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
>index da70d9248072b64786a5d48b71bccaa80b8aae8f..7b3c4d314f8eee692e2842a7056d6dc64936fc2f 100644
>--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
>+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
>@@ -286,6 +286,7 @@ void virtio_gpu_deinit(struct drm_device *dev)
> 	flush_work(&vgdev->cursorq.dequeue_work);
> 	flush_work(&vgdev->config_changed_work);
> 	virtio_reset_device(vgdev->vdev);
>+	virtio_gpu_fence_cleanup(vgdev);
> 	virtio_gpu_queue_cleanup(vgdev);
> 	vgdev->vdev->config->del_vqs(vgdev->vdev);
> }
>
>--
>2.47.2
>
>