Message-ID: <be01d2f7-8423-4e10-b65b-a84a7bc7c99e@collabora.com>
Date: Mon, 2 Dec 2024 17:29:05 +0300
From: Dmitry Osipenko <dmitry.osipenko@...labora.com>
To: Ryosuke Yasuoka <ryasuoka@...hat.com>, airlied@...hat.com,
kraxel@...hat.com, maarten.lankhorst@...ux.intel.com, mripard@...nel.org,
tzimmermann@...e.de, daniel@...ll.ch, jfalempe@...hat.com,
christophe.jaillet@...adoo.fr
Cc: virtualization@...ts.linux.dev, spice-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v5] drm/virtio: Add drm_panic support
On 11/29/24 15:24, Ryosuke Yasuoka wrote:
...
> +static int virtio_drm_get_scanout_buffer(struct drm_plane *plane,
> +					 struct drm_scanout_buffer *sb)
> +{
> +	struct virtio_gpu_object *bo;
> +
> +	if (!plane->state || !plane->state->fb || !plane->state->visible)
> +		return -ENODEV;
> +
> +	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
> +	if (virtio_gpu_is_vram(bo))
> +		return -ENODEV;
VirtIO-GPU now supports importing external dmabufs, so we should also
reject BOs that have bo->base.base.import_attach set here.
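I.e. something along these lines (untested sketch):

	/* imported dma-bufs can't be safely accessed at panic time */
	if (bo->base.base.import_attach)
		return -ENODEV;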
> +
> +	/* try to vmap it if possible */
> +	if (!bo->base.vaddr) {
> +		int ret;
> +
> +		ret = drm_gem_shmem_vmap(&bo->base, &sb->map[0]);
> +		if (ret)
> +			return ret;
I've now noticed that drm_gem_shmem_vmap() expects the BO reservation
lock to be held, and we can't take that lock at panic time.
https://elixir.bootlin.com/linux/v6.12.1/source/drivers/gpu/drm/drm_gem_shmem_helper.c#L330
This resv warning isn't triggered only because bo->base.vaddr is
already set for the VT framebuffer BO when the panic happens.
We should actually reject all BOs that don't have bo->base.vaddr set
at panic time. Please correct this in v6 and rebase on top of a recent
drm-next tree.
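I.e. drop the vmap path and do something like this (untested sketch):

	/* the resv lock can't be taken here, so only pre-vmapped BOs work */
	if (!bo->base.vaddr)
		return -ENODEV;

	iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);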
> +	} else {
> +		iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
> +	}
> +
> +	sb->format = plane->state->fb->format;
> +	sb->height = plane->state->fb->height;
> +	sb->width = plane->state->fb->width;
> +	sb->pitch[0] = plane->state->fb->pitches[0];
> +	return 0;
> +}
...
> +static void virtio_panic_flush(struct drm_plane *plane)
> +{
> +	struct virtio_gpu_object *bo;
> +	struct drm_device *dev = plane->dev;
> +	struct virtio_gpu_device *vgdev = dev->dev_private;
> +	struct drm_rect rect;
> +	void *p_vbuf = vgdev->panic_vbuf;
> +	struct virtio_gpu_vbuffer *vbuf_dumb_bo = p_vbuf;
> +	struct virtio_gpu_vbuffer *vbuf_resource_flush = p_vbuf + VBUFFER_SIZE;
This p_vbuf + VBUFFER_SIZE looks suspicious. The VBUFFER_SIZE macro
isn't guarded with parentheses (), so doesn't this vbuf_resource_flush
end up pointing at the CMD part of vbuf_dumb_bo?
Wouldn't using kmem_cache_zalloc(vgdev->vbufs, GFP_ATOMIC) to
dynamically allocate both buffers make everything cleaner?
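E.g. roughly like this (untested sketch; error handling omitted):

	struct virtio_gpu_vbuffer *vbuf_dumb_bo;
	struct virtio_gpu_vbuffer *vbuf_resource_flush;

	/* each cache entry already includes the inline CMD/RESP space */
	vbuf_dumb_bo = kmem_cache_zalloc(vgdev->vbufs, GFP_ATOMIC);
	vbuf_resource_flush = kmem_cache_zalloc(vgdev->vbufs, GFP_ATOMIC);
	if (!vbuf_dumb_bo || !vbuf_resource_flush)
		return;

That would avoid the manual offset arithmetic on panic_vbuf entirely.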
...
> -#define MAX_INLINE_CMD_SIZE 96
> -#define MAX_INLINE_RESP_SIZE 24
> -#define VBUFFER_SIZE (sizeof(struct virtio_gpu_vbuffer) \
> -		      + MAX_INLINE_CMD_SIZE \
> -		      + MAX_INLINE_RESP_SIZE)
...
--
Best regards,
Dmitry