Date:   Thu, 28 May 2020 17:22:20 +0300
From:   Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To:     Serge Semin <Sergey.Semin@...kalelectronics.ru>
Cc:     Vinod Koul <vkoul@...nel.org>, Viresh Kumar <vireshk@...nel.org>,
        Dan Williams <dan.j.williams@...el.com>,
        Serge Semin <fancer.lancer@...il.com>,
        Alexey Malahov <Alexey.Malahov@...kalelectronics.ru>,
        Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
        Arnd Bergmann <arnd@...db.de>,
        Rob Herring <robh+dt@...nel.org>, linux-mips@...r.kernel.org,
        devicetree@...r.kernel.org, dmaengine@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 04/10] dmaengine: Introduce max SG list entries
 capability

On Wed, May 27, 2020 at 01:50:15AM +0300, Serge Semin wrote:
> Some devices lack hardware support for automatically walking through and
> executing the entries of an SG list. In that case the burden of SG list
> traversal and DMA engine re-initialization falls on the DMA engine driver
> (normally implemented by using a DMA transfer completion IRQ to recharge
> the DMA device with the next SG list entry). Such a solution may not be
> suitable for some DMA consumers, though. In particular, SPI devices need
> the Tx and Rx DMA channels to work synchronously in order to avoid an Rx
> FIFO overflow. If the Rx DMA channel is paused for some time while the Tx
> DMA channel keeps implicitly pulling data into the Rx FIFO, the latter
> will eventually overflow and data will be lost. So if the SG list entries
> aren't automatically fetched by the DMA engine, but are selected for
> execution one by one from ISRs/deferred work/etc., that problem is bound
> to happen due to the non-deterministic latencies of servicing them.
> 
> In order to let the DMA consumer know whether the DMA device is capable
> of hardware-accelerated SG list traversal, introduce the max_sg_nents
> capability. The DMA engine driver is supposed to initialize it to 0 if
> there is no limit on the number of SG entries executed atomically, and to
> a non-zero value if there is such a constraint, in which case that value
> is the upper limit.
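
For reference, a DMA consumer would typically discover this limit through
dma_get_slave_caps() and cap the number of SG entries it submits per
descriptor; a minimal sketch, not part of this patch (the channel chan, the
scatterlist sgl/sg_len and the chunking policy are illustrative):

	struct dma_async_tx_descriptor *txd;
	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;

	/* max_sg_nents == 0 means the engine walks the whole list itself */
	if (caps.max_sg_nents && sg_len > caps.max_sg_nents) {
		/*
		 * Submit at most max_sg_nents entries per descriptor (or
		 * fall back to a single contiguous bounce buffer).
		 */
		sg_len = caps.max_sg_nents;
	}

	txd = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV, 0);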

Reviewed-by: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>

But see below.

> Suggested-by: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
> Signed-off-by: Serge Semin <Sergey.Semin@...kalelectronics.ru>
> Cc: Alexey Malahov <Alexey.Malahov@...kalelectronics.ru>
> Cc: Thomas Bogendoerfer <tsbogend@...ha.franken.de>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: Rob Herring <robh+dt@...nel.org>
> Cc: linux-mips@...r.kernel.org
> Cc: devicetree@...r.kernel.org
> 
> ---
> 
> Changelog v3:
> - This is a new patch created as a result of the discussion with Vinod and
>   Andy about the DW DMA burst and LLP capabilities.
> ---
>  drivers/dma/dmaengine.c   | 1 +
>  include/linux/dmaengine.h | 8 ++++++++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index b332ffe52780..ad56ad58932c 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -592,6 +592,7 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
>  	caps->directions = device->directions;
>  	caps->min_burst = device->min_burst;
>  	caps->max_burst = device->max_burst;
> +	caps->max_sg_nents = device->max_sg_nents;
>  	caps->residue_granularity = device->residue_granularity;
>  	caps->descriptor_reuse = device->descriptor_reuse;
>  	caps->cmd_pause = !!device->device_pause;
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 0c7403b27133..6801200c76b6 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -467,6 +467,9 @@ enum dma_residue_granularity {
>   *	should be checked by controller as well
>   * @min_burst: min burst capability per-transfer
>   * @max_burst: max burst capability per-transfer
> + * @max_sg_nents: max number of SG list entries executed in a single atomic
> + *	DMA transaction with no intermediate IRQ for reinitialization. Zero
> + *	value means unlimited number if entries.

if -> of ?

>   * @cmd_pause: true, if pause is supported (i.e. for reading residue or
>   *	       for resume later)
>   * @cmd_resume: true, if resume is supported
> @@ -481,6 +484,7 @@ struct dma_slave_caps {
>  	u32 directions;
>  	u32 min_burst;
>  	u32 max_burst;
> +	u32 max_sg_nents;
>  	bool cmd_pause;
>  	bool cmd_resume;
>  	bool cmd_terminate;
> @@ -773,6 +777,9 @@ struct dma_filter {
>   *	should be checked by controller as well
>   * @min_burst: min burst capability per-transfer
>   * @max_burst: max burst capability per-transfer
> + * @max_sg_nents: max number of SG list entries executed in a single atomic
> + *	DMA transaction with no intermediate IRQ for reinitialization. Zero
> + *	value means unlimited number if entries.

Ditto.

>   * @residue_granularity: granularity of the transfer residue reported
>   *	by tx_status
>   * @device_alloc_chan_resources: allocate resources and return the
> @@ -844,6 +851,7 @@ struct dma_device {
>  	u32 directions;
>  	u32 min_burst;
>  	u32 max_burst;
> +	u32 max_sg_nents;
>  	bool descriptor_reuse;
>  	enum dma_residue_granularity residue_granularity;
>  
> -- 
> 2.26.2
> 
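
On the provider side, a driver whose controller can only execute a bounded
descriptor chain before raising an IRQ would report that bound when setting
up its dma_device; a hedged sketch (the value 8, the max_burst value and the
hypothetical driver structure fd holding the dma_device are illustrative):

	struct dma_device *dd = &fd->dma;

	dma_cap_set(DMA_SLAVE, dd->cap_mask);
	dd->max_burst = 256;
	/* at most 8 SG entries execute before an intermediate IRQ is needed */
	dd->max_sg_nents = 8;
	/* an engine with full hardware LLP walking would leave this at 0 */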

-- 
With Best Regards,
Andy Shevchenko

