Message-ID: <CAF6AEGtrQ7rcK6sEbiaHa72cebGbrdS0RNS22T07XQwCM2sQ0g@mail.gmail.com>
Date: Fri, 18 Feb 2022 09:51:25 -0800
From: Rob Clark <robdclark@...il.com>
To: Chia-I Wu <olvaffe@...il.com>
Cc: ML dri-devel <dri-devel@...ts.freedesktop.org>,
Rob Clark <robdclark@...omium.org>,
David Airlie <airlied@...ux.ie>,
Gerd Hoffmann <kraxel@...hat.com>,
Gurchetan Singh <gurchetansingh@...omium.org>,
Daniel Vetter <daniel@...ll.ch>,
"open list:VIRTIO GPU DRIVER"
<virtualization@...ts.linux-foundation.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] drm/virtio: Add USE_INTERNAL blob flag
On Fri, Feb 18, 2022 at 8:42 AM Chia-I Wu <olvaffe@...il.com> wrote:
>
> On Fri, Feb 18, 2022 at 7:57 AM Rob Clark <robdclark@...il.com> wrote:
> >
> > From: Rob Clark <robdclark@...omium.org>
> >
> > With native userspace drivers in the guest, a lot of GEM objects need to
> > be neither shared nor mappable.  And in fact, making everything mappable
> > and/or shareable results in unreasonably high fd usage in the host VMM.
> >
> > Signed-off-by: Rob Clark <robdclark@...omium.org>
> > ---
> > This is for a thing I'm working on: a new virtgpu context type that
> > allows for running native userspace drivers in the guest, with a
> > thin shim in the host VMM.  In this case, the guest has a lot of
> > GEM buffer objects which need to be neither shared nor mappable.
> >
> > An alternative idea is to just drop the restriction that blob_flags
> > be non-zero.  I'm ok with either approach.
> Dropping the restriction sounds better to me.
>
> What is the use case for such a resource? Does the host need to know
> such a resource exists?
There are a bunch of use cases, some internal (like visibility stream
buffers filled during the binning pass and consumed during the draw
pass), some external (tiled and/or UBWC buffers that are never accessed
on the CPU).
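
For concreteness, a rough (untested) guest-userspace sketch of allocating
one of these with the proposed flag could look something like below;
drm_fd and bo_size are just placeholders for illustration:

  #include <sys/ioctl.h>
  #include <string.h>
  #include <drm/virtgpu_drm.h>

  /* Rough sketch: allocate a guest-memory blob that is neither mappable
   * nor shareable, using the proposed USE_INTERNAL flag. */
  static int create_internal_bo(int drm_fd, __u64 bo_size, __u32 *bo_handle)
  {
          struct drm_virtgpu_resource_create_blob args;

          memset(&args, 0, sizeof(args));
          args.blob_mem   = VIRTGPU_BLOB_MEM_GUEST;
          args.blob_flags = VIRTGPU_BLOB_FLAG_USE_INTERNAL;  /* proposed flag */
          args.size       = bo_size;

          if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &args))
                  return -1;

          *bo_handle = args.bo_handle;  /* guest-side GEM handle */
          return 0;
  }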
In theory, at least currently, drm/virtgpu does not need to know about
these buffers, but there are a lot of places in userspace that expect to
have a gem handle.  Longer term, I think I want to extend virtgpu with a
MADVISE ioctl so we can track DONTNEED state in the guest and only
release buffers when the host and/or guest is under memory pressure.
For that we will definitely need guest-side gem handles.
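
To be clear, no such ioctl exists today; just to sketch the idea, the
guest-side uapi could look roughly like this (loosely modeled on
drm/msm's GEM_MADVISE, all names made up):

  #include <linux/types.h>

  /* Hypothetical sketch only -- nothing like this exists in virtgpu yet. */
  #define VIRTGPU_MADV_WILLNEED 0
  #define VIRTGPU_MADV_DONTNEED 1

  struct drm_virtgpu_gem_madvise {
          __u32 bo_handle;  /* guest-side GEM handle */
          __u32 madv;       /* VIRTGPU_MADV_* */
          __u32 retained;   /* out: 0 if backing pages were already purged */
          __u32 pad;
  };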
BR,
-R
> >
> > drivers/gpu/drm/virtio/virtgpu_ioctl.c | 7 ++++++-
> > include/uapi/drm/virtgpu_drm.h | 1 +
> > 2 files changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > index 69f1952f3144..92e1ba6b8078 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > @@ -36,7 +36,8 @@
> >
> > #define VIRTGPU_BLOB_FLAG_USE_MASK (VIRTGPU_BLOB_FLAG_USE_MAPPABLE | \
> > VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
> > - VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
> > + VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE | \
> > + VIRTGPU_BLOB_FLAG_USE_INTERNAL)
> >
> > static int virtio_gpu_fence_event_create(struct drm_device *dev,
> > struct drm_file *file,
> > @@ -662,6 +663,10 @@ static int verify_blob(struct virtio_gpu_device *vgdev,
> > params->size = rc_blob->size;
> > params->blob = true;
> > params->blob_flags = rc_blob->blob_flags;
> > +
> > + /* USE_INTERNAL is local to guest kernel, don't pass to host: */
> > + params->blob_flags &= ~VIRTGPU_BLOB_FLAG_USE_INTERNAL;
> > +
> > return 0;
> > }
> >
> > diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
> > index 0512fde5e697..62b7483e5c60 100644
> > --- a/include/uapi/drm/virtgpu_drm.h
> > +++ b/include/uapi/drm/virtgpu_drm.h
> > @@ -163,6 +163,7 @@ struct drm_virtgpu_resource_create_blob {
> > #define VIRTGPU_BLOB_FLAG_USE_MAPPABLE 0x0001
> > #define VIRTGPU_BLOB_FLAG_USE_SHAREABLE 0x0002
> > #define VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
> > +#define VIRTGPU_BLOB_FLAG_USE_INTERNAL 0x0008 /* not-mappable, not-shareable */
> > /* zero is invalid blob_mem */
> > __u32 blob_mem;
> > __u32 blob_flags;
> > --
> > 2.34.1
> >