Date:   Fri, 20 Dec 2019 15:44:56 +0530
From:   Vinod Koul <vkoul@...nel.org>
To:     Peter Ujfalusi <peter.ujfalusi@...com>
Cc:     robh+dt@...nel.org, nm@...com, ssantosh@...nel.org,
        dan.j.williams@...el.com, dmaengine@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, devicetree@...r.kernel.org,
        linux-kernel@...r.kernel.org, grygorii.strashko@...com,
        lokeshvutla@...com, t-kristo@...com, tony@...mide.com,
        j-keerthy@...com, vigneshr@...com
Subject: Re: [PATCH v7 03/12] dmaengine: doc: Add sections for per descriptor
 metadata support

On 20-12-19, 11:52, Peter Ujfalusi wrote:
> Hi Vinod,
> 
> On 20/12/2019 10.28, Vinod Koul wrote:
> > Hi Peter,
> > 
> > On 09-12-19, 11:43, Peter Ujfalusi wrote:
> > 
> >> +  Optional: per descriptor metadata
> >> +  ---------------------------------
> >> +  DMAengine provides two ways for metadata support.
> >> +
> >> +  DESC_METADATA_CLIENT
> >> +
> >> +    The metadata buffer is allocated/provided by the client driver and it is
> >> +    attached to the descriptor.
> >> +
> >> +  .. code-block:: c
> >> +
> >> +     int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
> >> +				   void *data, size_t len);
> >> +
> >> +  DESC_METADATA_ENGINE
> >> +
> >> +    The metadata buffer is allocated/managed by the DMA driver. The client
> > 
> > and when would it be freed?
> 
> It is not defined here as it can be driver dependent, but afaik we have
> agreed (I'm not sure why it is not stated here or in the code) that in
> the DESC_METADATA_ENGINE case the metadata pointer is valid for the
> client from the time it got the desc (via the prep call) until the
> execution of the completion callback.
> Iow, DESC_METADATA_ENGINE does not make any sense if the client wants
> to receive metadata back but does not provide a callback.

Makes sense, and once the callback completes the driver can free it up!
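
To make that concrete, here is a minimal sketch (not from the patch; my_xfer
and my_client_complete are made-up names, only dmaengine_desc_get_metadata_ptr()
comes from this series) of a client consuming DESC_METADATA_ENGINE metadata
from its completion callback:

/*
 * Hypothetical client completion callback: the engine-managed metadata
 * pointer is assumed to be valid only from the prep call until this
 * callback returns, so it must be consumed here.
 */
struct my_xfer {
	struct dma_async_tx_descriptor *desc;	/* saved at prep time */
};

static void my_client_complete(void *param)
{
	struct my_xfer *xfer = param;
	size_t payload_len, max_len;
	void *metadata;

	metadata = dmaengine_desc_get_metadata_ptr(xfer->desc, &payload_len,
						   &max_len);
	if (IS_ERR_OR_NULL(metadata))
		return;

	/* parse/copy up to payload_len bytes here; do not keep the
	 * pointer after returning
	 */
}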
> 
> I will extend the documentation and comment in the code to reflect this.

makes sense, thanks!

> 
> >> +    driver can ask for the pointer, maximum size and the currently used size of
> >> +    the metadata and can directly update or read it.
> >> +
> >> +  .. code-block:: c
> >> +
> >> +     void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
> >> +		size_t *payload_len, size_t *max_len);
> >> +
> >> +     int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc,
> >> +		size_t payload_len);
> >> +
> >> +  Client drivers can query if a given mode is supported with:
> >> +
> >> +  .. code-block:: c
> >> +
> >> +     bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
> >> +		enum dma_desc_metadata_mode mode);
> >> +
> >> +  Depending on the used mode, client drivers must follow a different flow.
> >> +
> >> +  DESC_METADATA_CLIENT
> >> +
> >> +    - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM:
> >> +      1. prepare the descriptor (dmaengine_prep_*)
> >> +         construct the metadata in the client's buffer
> >> +      2. use dmaengine_desc_attach_metadata() to attach the buffer to the
> >> +         descriptor
> >> +      3. submit the transfer
> > 
> > This is simpler; once the txn is finished the metadata would be freed up, right?
> 
> It is up to the client driver what it does with the provided buffer.
> What the DMA driver does is not documented, as it is not relevant and
> can differ between HW or SW implementations.

Yeah, let's document that, and also the fact that the dmaengine driver
can't touch it after the callback.
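
For the DESC_METADATA_CLIENT MEM_TO_DEV flow quoted above, a rough sketch of
the 1-2-3 sequence on the client side could look like this (chan, buf_dma,
len, my_md, md_len and fill_my_metadata() are hypothetical; only the
dmaengine_* calls come from the existing API and this series):

	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;
	int ret;

	if (!dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT))
		return -ENOTSUPP;

	/* 1. prepare the descriptor and construct the metadata in the
	 * client's own buffer (my_md stays owned by the client)
	 */
	desc = dmaengine_prep_slave_single(chan, buf_dma, len, DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return -EINVAL;
	fill_my_metadata(my_md, md_len);

	/* 2. attach the client buffer to the descriptor */
	ret = dmaengine_desc_attach_metadata(desc, my_md, md_len);
	if (ret)
		return ret;

	/* 3. submit the transfer; my_md must stay valid at least until the
	 * completion callback has run
	 */
	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);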
-- 
~Vinod
