Message-ID: <aa375470-6297-0a8f-c520-7c3481520990@amd.com>
Date: Tue, 30 May 2023 11:29:45 -0700
From: Lizhi Hou <lizhi.hou@....com>
To: Christophe JAILLET <christophe.jaillet@...adoo.fr>,
<vkoul@...nel.org>, <dmaengine@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
CC: Nishad Saraf <nishads@....com>, <max.zhen@....com>,
<sonal.santan@....com>, <nishad.saraf@....com>
Subject: Re: [PATCH V1 QDMA 1/1] dmaengine: amd: qdma: Add AMD QDMA driver
On 5/27/23 06:33, Christophe JAILLET wrote:
> On 26/05/2023 18:49, Lizhi Hou wrote:
>> From: Nishad Saraf <nishads@....com>
>>
>> Adds a driver to enable PCIe boards that use the AMD QDMA (Queue-based
>> Direct Memory Access) subsystem, for example the Xilinx Alveo V70 AI
>> Accelerator devices.
>> https://www.xilinx.com/applications/data-center/v70.html
>>
>> The primary mechanism to transfer data using the QDMA is for the QDMA
>> engine to operate on instructions (descriptors) provided by the host
>> operating system. Using the descriptors, the QDMA can move data in both
>> the Host to Card (H2C) and the Card to Host (C2H) directions. The QDMA
>> provides a per-queue option for whether DMA traffic goes to an AXI4
>> memory-mapped (MM) interface or to an AXI4-Stream interface.
>>
>> The hardware details are documented at
>> https://docs.xilinx.com/r/en-US/pg302-qdma
>>
>> Implements dmaengine APIs to support MM DMA transfers.
>> - probe the available DMA channels
>> - use dma_slave_map for channel lookup
>> - use virtual channel to manage dmaengine tx descriptors
>> - implement device_prep_slave_sg callback to handle host scatter gather
>> list
>> - implement descriptor metadata operations to set device address for DMA
>> transfer
>>
>> Signed-off-by: Nishad Saraf <nishads@....com>
>> Signed-off-by: Lizhi Hou <lizhi.hou@....com>
>> ---
>
> [...]
>
>> +/**
>> + * qdma_alloc_queue_resources() - Allocate queue resources
>> + * @chan: DMA channel
>> + */
>> +static int qdma_alloc_queue_resources(struct dma_chan *chan)
>> +{
>> + struct qdma_queue *queue = to_qdma_queue(chan);
>> + struct qdma_device *qdev = queue->qdev;
>> + struct qdma_ctxt_sw_desc desc;
>> + size_t size;
>> + int ret;
>> +
>> + ret = qdma_clear_queue_context(queue);
>> + if (ret)
>> + return ret;
>> +
>> + size = queue->ring_size * QDMA_MM_DESC_SIZE;
>> + queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
>> + &queue->dma_desc_base,
>> + GFP_KERNEL | __GFP_ZERO);
>
> Nit: useless (but harmless).
> AFAIK, dma_alloc_coherent() has always returned zeroed memory.
> (Should you remove the __GFP_ZERO here, note there is another usage below.)
Sure. I will remove __GFP_ZERO.
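Something like this, I suppose (same call, just dropping the flag):

    queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
                                          &queue->dma_desc_base,
                                          GFP_KERNEL);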
>
>> + if (!queue->desc_base) {
>> + qdma_err(qdev, "Failed to allocate descriptor ring");
>> + return -ENOMEM;
>> + }
>> +
>
> [...]
>
>> +/**
>> + * struct qdma_platdata - Platform specific data for QDMA engine
>> + * @max_mm_channels: Maximum number of MM DMA channels in each direction
>> + * @device_map: DMA slave map
>> + * @irq_index: The index of first IRQ
>> + */
>> +struct qdma_platdata {
>> + u32 max_mm_channels;
>> + struct dma_slave_map *device_map;
>> + u32 irq_index;
>> +};
>
> Noob question: this struct is only retrieved from dev_get_platdata(),
> but there is no dev_set_platdata().
> How is the link made? How is this structure filled?
The platdata is set up by whoever registers the platform device. For
example, a PCI driver may do:

    struct qdma_platdata data = { .... };

    platform_device_register_resndata(.., &data, ...);
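
A slightly more complete sketch (the "amd-qdma" device name and the
slave map entries below are only placeholders for illustration):

    static struct dma_slave_map qdma_slave_map[] = {
            /* devname, slave channel name, filter param -- illustrative */
            { "client-drv", "h2c-0", NULL },
    };

    static int parent_probe(struct pci_dev *pdev,
                            const struct pci_device_id *id)
    {
            struct qdma_platdata pdata = {
                    .max_mm_channels = 1,
                    .device_map      = qdma_slave_map,
                    .irq_index       = 0,
            };
            struct platform_device *qdma_pdev;

            /*
             * platform_device_register_resndata() copies the data, so
             * pdata can live on the stack here.
             */
            qdma_pdev = platform_device_register_resndata(&pdev->dev,
                                            "amd-qdma", PLATFORM_DEVID_AUTO,
                                            NULL, 0, &pdata, sizeof(pdata));
            if (IS_ERR(qdma_pdev))
                    return PTR_ERR(qdma_pdev);

            return 0;
    }

dev_get_platdata() on that platform device then returns the copy made
by platform_device_register_resndata().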
>
>
> Should it matter, keeping the 2 u32 members one after the other would
> avoid a hole.
Sure. I will fix this.
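I.e. something like:

    struct qdma_platdata {
            u32 max_mm_channels;
            u32 irq_index;
            struct dma_slave_map *device_map;
    };

so the two u32 members sit next to each other and the pointer member
stays naturally aligned without a padding hole.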
Thanks,
Lizhi
>
> CJ
>
>> +
>> +#endif /* _PLATDATA_AMD_QDMA_H */
>