Message-ID: <CAKMK7uFOfoxbD2Z5mb-qHFnUe5rObGKQ6Ygh--HSH9M=9bziGg@mail.gmail.com>
Date: Mon, 21 Jun 2021 14:28:48 +0200
From: Daniel Vetter <daniel.vetter@...ll.ch>
To: Oded Gabbay <ogabbay@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>,
linux-rdma <linux-rdma@...r.kernel.org>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>, Doug Ledford <dledford@...hat.com>,
"airlied@...il.com" <airlied@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Greg KH <gregkh@...uxfoundation.org>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
Gal Pressman <galpress@...zon.com>, sleybo@...zon.com,
dri-devel <dri-devel@...ts.freedesktop.org>,
Tomer Tayar <ttayar@...ana.ai>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>,
amd-gfx list <amd-gfx@...ts.freedesktop.org>,
Alex Deucher <alexander.deucher@....com>,
Leon Romanovsky <leonro@...dia.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF
On Fri, Jun 18, 2021 at 2:36 PM Oded Gabbay <ogabbay@...nel.org> wrote:
> User process might want to share the device memory with another
> driver/device, and to allow it to access it over PCIe (P2P).
>
> To enable this, we utilize the dma-buf mechanism and add a dma-buf
> exporter support, so the other driver can import the device memory and
> access it.
>
> The device memory is allocated using our existing allocation uAPI,
> where the user will get a handle that represents the allocation.
>
> The user will then need to call the new
> uAPI (HL_MEM_OP_EXPORT_DMABUF_FD) and give the handle as a parameter.
>
> The driver will return a FD that represents the DMA-BUF object that
> was created to match that allocation.
>
> Signed-off-by: Oded Gabbay <ogabbay@...nel.org>
> Reviewed-by: Tomer Tayar <ttayar@...ana.ai>
Mission accomplished, we've gone full circle, and the totally-not-a-gpu
driver is now trying to use gpu infrastructure. And it seems to have
gained vram along the way too. Next up is going to be synchronization
using dma_fence, so you can pass buffers back and forth between drivers
without stalls.
Bonus points for this being at v3 before it shows up on dri-devel and
cc's dma-buf folks properly (not quite all, I added the missing
people).
I think we roughly have two options here:
a) Greg continues to piss off dri-devel folks while trying to look
cute&cuddly and steadfastly claiming that this accelerator doesn't work
like any of the other accelerator drivers we have in drivers/gpu/drm.
All while the driver looks ever more like one of those other accel
drivers.
b) We finally do what we should have done years back and treat this as
a proper driver submission, reviewing it on dri-devel instead of
sneaking it in through other channels because the merge criteria
dri-devel has are too onerous and people who don't have 20 years or so
of experience with accel stacks don't like them.
"But this probably means a new driver and big disruption!"
Not my problem; I'm not the dude who has to come up with an excuse for
this, since I didn't merge the driver in the first place. I do get to
throw in a "we all told you so", though that's not helping.
Also I'm wondering which the other driver is that we're sharing buffers
with. The gaudi stuff doesn't have real struct pages as backing
storage; it only fills out the dma_addr_t. That tends to blow up with
other drivers, and the only place where this is guaranteed to work is
if you have a dynamic importer which sets the allow_peer2peer flag.
Adding maintainers from other subsystems who might want to chime in
here. So even aside from the big question above, as-is this is broken.
Currently only two drivers set allow_peer2peer, so those are the only
ones that can consume these buffers from device memory. Pinging those
folks specifically.
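For reference, the importer side of that looks roughly like the sketch
below. This is not from the patch; the my_* names are made up, and only
the dma_buf_attach_ops/dma_buf_dynamic_attach pieces are the upstream
dynamic importer interface. The point is that only an importer that
attaches with allow_peer2peer = true is ever supposed to see such
struct-page-less buffers:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static void my_move_notify(struct dma_buf_attachment *attach)
{
	/* Exporter is about to move the buffer; drop any cached mappings. */
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.allow_peer2peer = true,	/* we can DMA to PCIe bus addresses */
	.move_notify = my_move_notify,	/* mandatory for dynamic importers */
};

static int my_import(struct device *dev, struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret;

	attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, NULL);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/* Dynamic importers must map with the reservation lock held. */
	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		goto err_detach;

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto err_detach;
	}

	/* sgt only carries dma_addr_t entries here, sg_page() is not valid. */
	return 0;

err_detach:
	dma_buf_detach(dmabuf, attach);
	return ret;
}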
Doug/Jason from infiniband: Should we add linux-rdma to the dma-buf
wildcard match so that you can catch these next time around too? At
least when people use scripts/get_maintainer.pl correctly. All the
other subsystems using dma-buf are on there already (dri-devel,
linux-media and linaro-mm-sig for android/arm embedded stuff).
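If so, it would be roughly a one-line addition to that MAINTAINERS
entry, something like this (a sketch; the surrounding context lines are
from memory and might not match the tree exactly):

 DMA BUFFER SHARING FRAMEWORK
 L:	linux-media@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
+L:	linux-rdma@vger.kernel.org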
Cheers, Daniel
> ---
> include/uapi/misc/habanalabs.h | 28 +++++++++++++++++++++++++++-
> 1 file changed, 27 insertions(+), 1 deletion(-)
>
> diff --git a/include/uapi/misc/habanalabs.h b/include/uapi/misc/habanalabs.h
> index a47a731e4527..aa3d8e0ba060 100644
> --- a/include/uapi/misc/habanalabs.h
> +++ b/include/uapi/misc/habanalabs.h
> @@ -808,6 +808,10 @@ union hl_wait_cs_args {
> #define HL_MEM_OP_UNMAP 3
> /* Opcode to map a hw block */
> #define HL_MEM_OP_MAP_BLOCK 4
> +/* Opcode to create DMA-BUF object for an existing device memory allocation
> + * and to export an FD of that DMA-BUF back to the caller
> + */
> +#define HL_MEM_OP_EXPORT_DMABUF_FD 5
>
> /* Memory flags */
> #define HL_MEM_CONTIGUOUS 0x1
> @@ -878,11 +882,26 @@ struct hl_mem_in {
> /* Virtual address returned from HL_MEM_OP_MAP */
> __u64 device_virt_addr;
> } unmap;
> +
> + /* HL_MEM_OP_EXPORT_DMABUF_FD */
> + struct {
> + /* Handle returned from HL_MEM_OP_ALLOC. In Gaudi,
> + * where we don't have MMU for the device memory, the
> + * driver expects a physical address (instead of
> + * a handle) in the device memory space.
> + */
> + __u64 handle;
> + /* Size of memory allocation. Relevant only for GAUDI */
> + __u64 mem_size;
> + } export_dmabuf_fd;
> };
>
> /* HL_MEM_OP_* */
> __u32 op;
> - /* HL_MEM_* flags */
> + /* HL_MEM_* flags.
> + * For the HL_MEM_OP_EXPORT_DMABUF_FD opcode, this field holds the
> + * DMA-BUF file/FD flags.
> + */
> __u32 flags;
> /* Context ID - Currently not in use */
> __u32 ctx_id;
> @@ -919,6 +938,13 @@ struct hl_mem_out {
>
> __u32 pad;
> };
> +
> + /* Returned in HL_MEM_OP_EXPORT_DMABUF_FD. Represents the
> + * DMA-BUF object that was created to describe a memory
> + * allocation on the device's memory space. The FD should be
> + * passed to the importer driver
> + */
> + __u64 fd;
> };
> };
>
> --
> 2.25.1
>
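For completeness, the userspace side of the flow described in the
commit message would look roughly like this. It's a sketch, untested;
it assumes the existing HL_IOCTL_MEMORY ioctl and uapi header, a handle
coming from an earlier HL_MEM_OP_ALLOC call, and O_CLOEXEC as the
dma-buf file flag:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <misc/habanalabs.h>

/* Export an existing device memory allocation as a dma-buf FD. */
static int export_dmabuf(int dev_fd, __u64 handle, __u64 mem_size)
{
	union hl_mem_args args;

	memset(&args, 0, sizeof(args));
	args.in.op = HL_MEM_OP_EXPORT_DMABUF_FD;
	args.in.flags = O_CLOEXEC;			/* dma-buf file/FD flags */
	args.in.export_dmabuf_fd.handle = handle;	/* physical address on Gaudi */
	args.in.export_dmabuf_fd.mem_size = mem_size;	/* only used on Gaudi */

	if (ioctl(dev_fd, HL_IOCTL_MEMORY, &args))
		return -1;

	return (int)args.out.fd;	/* hand this FD to the importing driver */
}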
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch