Message-ID: <CAAfnVBnG2yAkmantf9uCdCf7ur2tkVKgrQCpD78FHBnLTv62VA@mail.gmail.com>
Date: Fri, 12 Apr 2019 17:49:46 -0700
From: Gurchetan Singh <gurchetansingh@...omium.org>
To: Gerd Hoffmann <kraxel@...hat.com>
Cc: ML dri-devel <dri-devel@...ts.freedesktop.org>,
virtio@...ts.oasis-open.org, David Airlie <airlied@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Marc-André Lureau <marcandre.lureau@...il.com>,
Tomeu Vizoso <tomeu.vizoso@...labora.com>,
Jason Wang <jasowang@...hat.com>,
David Airlie <airlied@...ux.ie>,
"open list:VIRTIO CORE, NET AND BLOCK DRIVERS"
<virtualization@...ts.linux-foundation.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
On Thu, Apr 11, 2019 at 10:49 PM Gerd Hoffmann <kraxel@...hat.com> wrote:
>
> On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> > On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann <kraxel@...hat.com> wrote:
> > >
> > > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > > + struct virtio_gpu_ctrl_hdr hdr;
> > > > > + __le32 resource_id;
> > > > > + __le32 format;
> > > > > + __le32 width;
> > > > > + __le32 height;
> > > > > + /* 3d only */
> > > > > + __le32 target;
> > > > > + __le32 bind;
> > > > > + __le32 depth;
> > > > > + __le32 array_size;
> > > > > + __le32 last_level;
> > > > > + __le32 nr_samples;
> > > > > + __le32 flags;
> > > > > +};
> > > >
> > > >
> > > > I assume this is always backed by some host side allocation, without any
> > > > guest side pages associated with it?
> > >
> > > No. It is not backed at all yet. Workflow would be like this:
> > >
> > > (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> > > (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> > > (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
> >
> > Thanks for the clarification.
> >
> > >
> > > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > > then go attach multiple resources to it.
> > >
> > > > If so, do we want the option for the guest to allocate?
> > >
> > > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > > (initially guest allocated only, i.e. what virtio-gpu supports today,
> > > the plan is to add other allocation types later on).
> >
> > You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> > dma-bufs with this, correct? Let me know if it's a non-goal :-)
>
> Yes, even though it is not clear yet how we are going to handle
> host-allocated buffers in the vhost-user case ...
For Vulkan, VK_EXT_external_memory_dma_buf +
VK_EXT_image_drm_format_modifier should do the trick on Linux devices
that support these extensions.
For GL, I don't see any way forward given the current KVM API.
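For illustration only, a rough sketch of the import path with those two
extensions (hypothetical helper, no error handling; fd, modifier, size and
memory_type_index are assumed to come from the exporting side, and a real
import would normally pass the exporter's plane layouts via
VkImageDrmFormatModifierExplicitCreateInfoEXT rather than the list variant):

#include <vulkan/vulkan.h>

static void import_dmabuf(VkDevice dev, int fd, uint64_t modifier,
                          uint32_t width, uint32_t height,
                          VkDeviceSize size, uint32_t memory_type_index,
                          VkImage *image, VkDeviceMemory *memory)
{
        /* VK_EXT_image_drm_format_modifier: create the image with an
         * explicit DRM format modifier instead of OPTIMAL/LINEAR tiling. */
        VkImageDrmFormatModifierListCreateInfoEXT mods = {
                .sType = VK_STRUCTURE_TYPE_IMAGE_DRM_FORMAT_MODIFIER_LIST_CREATE_INFO_EXT,
                .drmFormatModifierCount = 1,
                .pDrmFormatModifiers = &modifier,
        };
        VkExternalMemoryImageCreateInfo ext = {
                .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO,
                .pNext = &mods,
                .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
        };
        VkImageCreateInfo ci = {
                .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, .pNext = &ext,
                .imageType = VK_IMAGE_TYPE_2D, .format = VK_FORMAT_R8G8B8A8_UNORM,
                .extent = { width, height, 1 }, .mipLevels = 1, .arrayLayers = 1,
                .samples = VK_SAMPLE_COUNT_1_BIT,
                .tiling = VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT,
                .usage = VK_IMAGE_USAGE_SAMPLED_BIT,
                .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
                .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        };
        vkCreateImage(dev, &ci, NULL, image);

        /* VK_EXT_external_memory_dma_buf: import the fd as device memory. */
        VkMemoryDedicatedAllocateInfo dedicated = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO,
                .image = *image,   /* imported images usually want a dedicated allocation */
        };
        VkImportMemoryFdInfoKHR import = {
                .sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR,
                .pNext = &dedicated,
                .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
                .fd = fd,          /* consumed by the driver on success */
        };
        VkMemoryAllocateInfo ai = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, .pNext = &import,
                .allocationSize = size, .memoryTypeIndex = memory_type_index,
        };
        vkAllocateMemory(dev, &ai, NULL, memory);
        vkBindImageMemory(dev, *image, *memory, 0);
}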
> > If so, we might want to distinguish between memory types (kind of like
> > memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]
>
> For the host-allocated buffers we surely want that, yes.
> For guest-allocated memory regions it isn't useful I think ...
Are guest memory regions always write-combined, or can they be (read)
cached? Is this something we can control in the virtgpu kernel driver?
>
> > 1) Vulkan seems the most straightforward
> >
> > virtio_gpu_cmd_memory_create --> create kernel data structure,
> > vkAllocateMemory on the host or import guest memory into Vulkan,
> > depending on the memory type
> > virtio_gpu_cmd_resource_create_v2 --> vkCreateImage +
> > vkGetImageMemoryRequirements on host
> > virtio_gpu_cmd_resource_attach_memory --> vkBindImageMemory on host
>
> Yes.
>
> Note 1: virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
> ordering doesn't matter, so you can send virtio_gpu_cmd_resource_create_v2
> first to figure out stride and size, then adjust the memory size accordingly.
>
> Note 2: The old virtio_gpu_cmd_resource_create variants can be used
> too if you don't need the _v2 features.
>
> Note 3: If I understand things correctly it would be valid to create a
> memory pool (allocate one big chunk of memory) with vkAllocateMemory,
> then bind multiple images at different offsets to it.
>
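To make the mapping concrete, a very rough host-side skeleton for case 1
(hypothetical helper names, no error handling, virtio-format-to-VkFormat
translation omitted):

#include <vulkan/vulkan.h>

/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 -> vkCreateImage + requirements query;
 * the size (and strides, for linear/modifier tiling) would feed the
 * VIRTIO_GPU_RESP_OK_RESOURCE_INFO reply. */
static void res_create_v2(VkDevice dev, uint32_t width, uint32_t height,
                          VkFormat fmt, VkImage *img, VkMemoryRequirements *reqs)
{
        VkImageCreateInfo ci = {
                .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
                .imageType = VK_IMAGE_TYPE_2D, .format = fmt,
                .extent = { width, height, 1 }, .mipLevels = 1, .arrayLayers = 1,
                .samples = VK_SAMPLE_COUNT_1_BIT, .tiling = VK_IMAGE_TILING_OPTIMAL,
                .usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
                .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
                .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        };
        vkCreateImage(dev, &ci, NULL, img);
        vkGetImageMemoryRequirements(dev, *img, reqs);
}

/* VIRTIO_GPU_CMD_MEMORY_CREATE -> vkAllocateMemory for a host memory type
 * (a guest memory type would instead import the pages, e.g. via
 * VK_EXT_external_memory_host). */
static void memory_create(VkDevice dev, VkDeviceSize size,
                          uint32_t memory_type_index, VkDeviceMemory *mem)
{
        VkMemoryAllocateInfo ai = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
                .allocationSize = size, .memoryTypeIndex = memory_type_index,
        };
        vkAllocateMemory(dev, &ai, NULL, mem);
}

/* VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY -> vkBindImageMemory, which also
 * covers Note 3's "one big pool, many images at different offsets". */
static void attach_memory(VkDevice dev, VkImage img, VkDeviceMemory mem,
                          VkDeviceSize offset)
{
        vkBindImageMemory(dev, img, mem, offset);
}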
> > 2) With a guest allocated dma-buf using some new allocation library,
> >
> > virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> > optimal allocation
> > virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> > it's guest memory type
> > virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> > resource in kernel, send iovecs to host for bookkeeping
>
> virtio_gpu_cmd_memory_create sends the iovecs. Otherwise correct.
>
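Got it. For the guest-allocated path I'm picturing something shaped like the
existing RESOURCE_ATTACH_BACKING flow, i.e. the command followed by
virtio_gpu_mem_entry descriptors for the pages. This is just a guess at the
shape, not necessarily what patch 2 actually defines:

/* Hypothetical sketch, not the actual patch 2 layout. */
struct virtio_gpu_cmd_memory_create {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 memory_id;
        __le32 flags;           /* memory type: guest pages now, host types later */
        __le32 nr_entries;      /* number of virtio_gpu_mem_entry that follow */
        __le32 padding;
        /* followed by nr_entries * struct virtio_gpu_mem_entry, i.e. the
         * guest-physical iovecs the host keeps for bookkeeping */
};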
> > 3) With gbm it's a little trickier,
> >
> > virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> > get metadata in return
>
> Only get metadata in return.
With the current gbm API, metadata is only returned after the
allocation is complete.
We're fine with changing this for minigbm (i.e., adding something like
gbm_get_allocation_metadata(width, height, modifier, *out_metadata)).
Not sure what the plan is for Mesa gbm, or the Unix Device Memory
Allocator.
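Roughly (hypothetical signature, just to show the shape; a real version would
presumably also take the gbm device and the format):

/* Hypothetical minigbm entry point: report the layout an allocation with
 * these parameters would get, without actually allocating anything. */
struct gbm_allocation_metadata {
        uint32_t num_planes;
        uint32_t strides[4];
        uint32_t offsets[4];
        uint64_t modifier;
        uint64_t total_size;
};

int gbm_get_allocation_metadata(uint32_t width, uint32_t height,
                                uint64_t modifier,
                                struct gbm_allocation_metadata *out_metadata);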
>
> > virtio_gpu_cmd_memory_create --> create kernel data structure, but
> > don't allocate pages, nothing on the host
>
> Memory allocation happens here. Probably makes sense to have a
> virtio_gpu_cmd_memory_create_host command here, because the parameters
> we need are quite different from the guest-allocated case.
>
> Maybe we even need a virtio_gpu_cmd_memory_create_host_for_resource
> variant, given that gbm doesn't have raw memory buffers without any
> format attached to it.
We should design VIRTIO_GPU_F_RESOURCE_V2 around what gbm should become,
not around what gbm currently is.
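That said, if a host-side create does turn out to be needed, a strawman in
the style of the other commands (purely illustrative, not part of this
series) could be:

/* Strawman only. */
struct virtio_gpu_cmd_memory_create_host {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 memory_id;
        __le32 memory_type;     /* index into a host-advertised list, like
                                   Vulkan's memoryTypeIndex */
        __le64 size;
        __le32 flags;
        __le32 padding;
};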
>
> > > > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > > > +struct virtio_gpu_resp_resource_info {
> > > > > + struct virtio_gpu_ctrl_hdr hdr;
> > > > > + __le32 stride[4];
> > > > > + __le32 size[4];
> > > > > +};
> > > >
> > > > offsets[4] needed too
> > >
> > > That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
> >
> > I assume the offsets aren't returned by
> > VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.
>
> Yes, they are sent by the guest.
>
> > How does the guest know which offsets in memory will be compatible
> > for sharing with the display, camera, etc.?
>
> Is it good enough to align offsets to page boundaries?
Do you mean ((offset - stride * height) <= PAGE_SIZE)? If so, no.
For example, do the calculation here with a 256 x 256 NV12 buffer:
https://chromium.googlesource.com/chromiumos/platform/minigbm/+/master/rockchip.c#166
Is the offset generally divisible by PAGE_SIZE? Yes, in the cases I've seen.
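To illustrate why the offsets can't be derived from stride * height alone
(made-up alignment rules below, not the actual rockchip.c math):

#include <stdint.h>

#define ALIGN(v, a)  (((v) + (a) - 1) & ~((uint32_t)(a) - 1))

/* Illustrative NV12 layout: the CbCr plane starts after padding rows and is
 * then rounded up to a page boundary, so uv_offset stays page-divisible but
 * (uv_offset - y_stride * height) can exceed PAGE_SIZE for realistic widths. */
static uint32_t nv12_uv_offset(uint32_t width, uint32_t height)
{
        uint32_t y_stride = ALIGN(width, 64);     /* hypothetical stride alignment */
        uint32_t y_height = ALIGN(height, 32);    /* hypothetical padding rows     */
        return ALIGN(y_stride * y_height, 4096);  /* page-aligned plane start      */
}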
> > Also, do you want to cover the case where the resource is backed by
> > three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)?
>
> Good point. I guess we should make memory_id in
> virtio_gpu_cmd_resource_attach_memory an array then,
> so you can specify a different memory region for each plane.
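Sounds good. For the record, a strawman attach command covering both the
single-memory and the disjoint case (again, just illustrative):

struct virtio_gpu_cmd_resource_attach_memory {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 resource_id;
        __le32 nr_planes;       /* 1 for the non-disjoint case */
        __le64 offsets[4];      /* per-plane byte offset into its memory region */
        __le32 memory_id[4];    /* per-plane memory region */
};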
>
> cheers,
> Gerd