Date:   Sat, 27 May 2023 15:33:30 +0200
From:   Christophe JAILLET <christophe.jaillet@...adoo.fr>
To:     Lizhi Hou <lizhi.hou@....com>, vkoul@...nel.org,
        dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     Nishad Saraf <nishads@....com>, max.zhen@....com,
        sonal.santan@....com, nishad.saraf@....com
Subject: Re: [PATCH V1 QDMA 1/1] dmaengine: amd: qdma: Add AMD QDMA driver

On 26/05/2023 18:49, Lizhi Hou wrote:
> From: Nishad Saraf <nishads@....com>
> 
> Add a driver to enable PCIe boards that use the AMD QDMA (Queue-based
> Direct Memory Access) subsystem, for example the Xilinx Alveo V70 AI
> Accelerator devices:
>      https://www.xilinx.com/applications/data-center/v70.html
> 
> The primary mechanism to transfer data using the QDMA is for the QDMA
> engine to operate on instructions (descriptors) provided by the host
> operating system. Using the descriptors, the QDMA can move data in
> both the Host to Card (H2C) and the Card to Host (C2H) directions.
> The QDMA provides a per-queue option to route DMA traffic either to
> an AXI4 memory-mapped (MM) interface or to an AXI4-Stream interface.
> 
> The hardware details are documented at
>      https://docs.xilinx.com/r/en-US/pg302-qdma
> 
> Implement the dmaengine APIs to support MM DMA transfers:
> - probe the available DMA channels
> - use dma_slave_map for channel lookup
> - use virtual channels to manage dmaengine tx descriptors
> - implement the device_prep_slave_sg callback to handle host
>    scatter-gather lists
> - implement descriptor metadata operations to set the device address
>    for DMA transfer
> 
> Signed-off-by: Nishad Saraf <nishads@....com>
> Signed-off-by: Lizhi Hou <lizhi.hou@....com>
> ---
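
As a side note, the callbacks listed above map onto the standard
dmaengine slave flow. A minimal, untested client-side sketch (the
channel name and device address below are made up for illustration,
they are not from this patch):

	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;
	dma_cookie_t cookie;
	u64 dev_addr = 0;	/* hypothetical device address */

	/* Looked up through the dma_slave_map from qdma_platdata */
	chan = dma_request_chan(dev, "h2c-0");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -EIO;
	}

	/* Device address is passed via the descriptor metadata ops */
	dmaengine_desc_attach_metadata(desc, &dev_addr, sizeof(dev_addr));

	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);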

[...]

> +/**
> + * qdma_alloc_queue_resources() - Allocate queue resources
> + * @chan: DMA channel
> + */
> +static int qdma_alloc_queue_resources(struct dma_chan *chan)
> +{
> +	struct qdma_queue *queue = to_qdma_queue(chan);
> +	struct qdma_device *qdev = queue->qdev;
> +	struct qdma_ctxt_sw_desc desc;
> +	size_t size;
> +	int ret;
> +
> +	ret = qdma_clear_queue_context(queue);
> +	if (ret)
> +		return ret;
> +
> +	size = queue->ring_size * QDMA_MM_DESC_SIZE;
> +	queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
> +					      &queue->dma_desc_base,
> +					      GFP_KERNEL | __GFP_ZERO);

Nit: the __GFP_ZERO is useless (but harmless).
AFAIK, dma_alloc_coherent() has always returned zeroed memory.
(Should you remove this __GFP_ZERO, there is another usage below.)
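
That is, the allocation could simply be:

	queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
					      &queue->dma_desc_base,
					      GFP_KERNEL);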

> +	if (!queue->desc_base) {
> +		qdma_err(qdev, "Failed to allocate descriptor ring");
> +		return -ENOMEM;
> +	}
> +

[...]

> +/**
> + * struct qdma_platdata - Platform specific data for QDMA engine
> + * @max_mm_channels: Maximum number of MM DMA channels in each direction
> + * @device_map: DMA slave map
> + * @irq_index: The index of first IRQ
> + */
> +struct qdma_platdata {
> +	u32			max_mm_channels;
> +	struct dma_slave_map	*device_map;
> +	u32			irq_index;
> +};

Noob question: this struct is only retrieved with dev_get_platdata(),
but there is no matching dev_set_platdata() call in the patch.
How is the link made? How is this structure filled?
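
(For what it's worth, I would have expected the parent driver to attach
it when registering the platform device, along these lines -- the names
and values below are made up for illustration:

	static struct dma_slave_map qdma_slave_map[] = {
		{ "xrt-user", "h2c-0", NULL },	/* hypothetical entry */
	};

	static struct qdma_platdata pdata = {
		.max_mm_channels = 2,
		.device_map	 = qdma_slave_map,
		.irq_index	 = 0,
	};

	pdev = platform_device_register_data(parent, "amd-qdma",
					     PLATFORM_DEVID_AUTO,
					     &pdata, sizeof(pdata));

but I don't see where that happens in this series.)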


Should it matter, keeping the two u32 members one after the other would
avoid a hole.
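
That is:

	struct qdma_platdata {
		u32			max_mm_channels;
		u32			irq_index;
		struct dma_slave_map	*device_map;
	};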

CJ

> +
> +#endif /* _PLATDATA_AMD_QDMA_H */
