Message-ID: <2575b93d-f236-1c52-1633-ed51e29141b5@ti.com>
Date:   Thu, 30 Aug 2018 11:57:50 +0300
From:   Peter Ujfalusi <peter.ujfalusi@...com>
To:     Vinod <vkoul@...nel.org>
CC:     <dan.j.williams@...el.com>, <dmaengine@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <lars@...afoo.de>,
        <radheys@...inx.com>
Subject: Re: [PATCH] dmaengine: Add metadata_ops for dma_async_tx_descriptor

Vinod,

On 2018-08-29 19:22, Vinod wrote:
>>>> + *   2. use dmaengine_desc_attach_metadata() to attach the buffer to the
>>>> + *	descriptor
>>>> + *   3. submit the transfer
>>>> + * - DMA_DEV_TO_MEM:
>>>> + *   1. prepare the descriptor (dmaengine_prep_*)
>>>> + *   2. use dmaengine_desc_attach_metadata() to attach the buffer to the
>>>> + *	descriptor
>>>> + *   3. submit the transfer
>>>> + *   4. when the transfer is completed, the metadata should be available in the
>>>> + *	attached buffer
>>>
>>> I guess this is good to be moved into Documentation
>>
>> Should I create a new file for metadata? I guess it would make sense as the
>> information is for both clients and engines.
> 
> Hmm, not sure, let's see how it looks as entries in these files, detailing
> the roles of clients and providers

Should I update both the client and provider documentation, tailoring the
content for each audience?

>>> also, we don't allow this for memcpy txn?
>>
>> I have not thought about that, but if I think about it, it should be along
>> the same lines as MEM_TO_DEV.
>> I'll add MEM_TO_MEM to the documentation as well.
> 
> Okay, and let's not implement it then...

I'm not going to implement it, but the documentation could note that if
metadata is used for MEM_TO_MEM then it is the same use case as with
MEM_TO_DEV.
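
To illustrate, a minimal, hypothetical sketch of that MEM_TO_MEM case (the
function name and error handling are mine; only the attach call is the API
proposed in this patch):

/*
 * Sketch only: chan is a channel from dma_request_chan(), md_buf/md_len is
 * the client-allocated metadata buffer. Needs linux/dmaengine.h.
 */
static int example_memcpy_with_metadata(struct dma_chan *chan, dma_addr_t dst,
					dma_addr_t src, size_t len,
					void *md_buf, size_t md_len)
{
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;
	int ret;

	/* 1. prepare the descriptor */
	desc = dmaengine_prep_dma_memcpy(chan, dst, src, len,
					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -EINVAL;

	/* 2. attach the client's metadata buffer to the descriptor */
	ret = dmaengine_desc_attach_metadata(desc, md_buf, md_len);
	if (ret)
		return ret;

	/* 3. submit the transfer */
	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);

	return 0;
}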

> 
>>
>>>> + *
>>>> + * @DESC_METADATA_ENGINE - the metadata buffer is allocated/managed by the DMA
>>>> + *  driver. The client driver can ask for the pointer, maximum size and the
>>>> + *  currently used size of the metadata and can directly update or read it.
>>>> + *  dmaengine_desc_get_metadata_ptr() and dmaengine_desc_set_metadata_len() are
>>>> + *  provided as helper functions.
>>>> + *
>>>> + * Client drivers interested in using this mode can follow:
>>>> + * - DMA_MEM_TO_DEV:

Here I will add DMA_MEM_TO_MEM as well.

>>>> + *   1. prepare the descriptor (dmaengine_prep_*)
>>>> + *   2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the engine's
>>>> + *	metadata area
>>>> + *   3. update the metadata at the pointer
>>>> + *   4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the amount
>>>> + *	of data the client has placed into the metadata buffer
>>>> + *   5. submit the transfer
>>>> + * - DMA_DEV_TO_MEM:
>>>> + *   1. prepare the descriptor (dmaengine_prep_*)
>>>> + *   2. submit the transfer
>>>> + *   3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get the
>>>> + *	pointer to the engine's metadata area
>>>> + *   4. read out the metadata from the pointer
>>>> + *
>>>> + * Note: the two modes are not compatible and clients must use one mode for a
>>>> + * descriptor.
>>>> + */
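
As an illustration of the DEV_TO_MEM flow above, a DESC_METADATA_ENGINE client
would read the metadata back on completion roughly like this (sketch only; the
get_metadata_ptr signature with payload/max length out-parameters is assumed
from the description above):

/*
 * Sketch only: called after the DEV_TO_MEM transfer has completed, e.g.
 * from the client's completion callback.
 */
static void example_read_engine_metadata(struct dma_async_tx_descriptor *desc)
{
	size_t payload_len, max_len;
	void *md;

	/* 3. get the pointer to the engine's metadata area */
	md = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);
	if (IS_ERR(md))
		return;

	/* 4. read out the metadata; payload_len bytes are valid */
	print_hex_dump_debug("rx md: ", DUMP_PREFIX_OFFSET, 16, 1, md,
			     payload_len, false);
}
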
>>>> +enum dma_desc_metadata_mode {
>>>> +	DESC_METADATA_CLIENT = (1 << 0),
>>>> +	DESC_METADATA_ENGINE = (1 << 1),
>>>
>>> BIT(x)
>>
>> OK, I followed what we have in the header to not mix (1 << x) and BIT(x)
> 
> yeah lets update :)

OK, I will switch to BIT() in v2.
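
Just to spell it out, in v2 the enum would then read:

enum dma_desc_metadata_mode {
	DESC_METADATA_CLIENT = BIT(0),
	DESC_METADATA_ENGINE = BIT(1),
};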

>>>> +static inline int _desc_check_and_set_metadata_mode(
>>>
>>> why does this need to start with _ ?
>>
>> To scare people away from using it in client code ;)
> 
> Let's not expose it to them :D

Sure, if the code moves to dmaengine.c that is guaranteed.
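
For reference, once it lives in dmaengine.c the helper could be as simple as
something like this (sketch only; the desc_metadata_mode field name is an
assumption on my side, not taken verbatim from the patch):

/* Internal to dmaengine.c, so no leading underscore needed anymore */
static int desc_check_and_set_metadata_mode(
	struct dma_async_tx_descriptor *desc, enum dma_desc_metadata_mode mode)
{
	/* Make sure the two metadata modes are not mixed on one descriptor */
	if (!desc->desc_metadata_mode)
		desc->desc_metadata_mode = mode;
	else if (desc->desc_metadata_mode != mode)
		return -EINVAL;

	return 0;
}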

>>> Also I would like to see a use :-) before further comments.
>>
>> You mean the DMA driver and at least one client?
> 
> DMA driver to _at_least_ start with. Client even better

Hrm, I can send the DMA driver as an RFC (not for merging, it will not
compile), but I would need to write a fairly extensive cover letter and
documentation, since the UDMA is 'just' one piece of the data movement
architecture and I need to explain a couple of things around it.

I will certainly need a couple of days for that.

>> I have the DMA driver in my public facing branch [1], but it is not an easy
>> read with its close to 4k lines of code.
> 
> It doesn't exist :P

In that sense it does not, agreed.

>> The client is not in my branch and it is actually using an older version of
>> the metadata support.
>>
>> The problem is that I don't know when I will be able to send the driver for
>> review, as all of this is targeting a brand new SoC (AM654) with a completely
>> new data movement architecture. There are lots of dependencies that still
>> need to be upstreamed before I can send something which at least compiles.
>>
>> I can offer snippets from the client driver, if that is good enough, or a
>> link to the public tree where it can be accessed, but it is not going to go
>> upstream before the DMA driver.
> 
> TBH that's not going to help much, let's come back to it when you need
> this upstream.

One of the reasons I sent the metadata support early is that Radhey was
looking for a similar thing for xilinx_dma, and I already had a generic
implementation of it which suits his case.

I was planning to send the metadata support along with the DMA driver
(and other core changes and new features).

If not for me, then for Radhey's sake, can the metadata support be
considered standalone for now?

I will send v2 as soon as I have it ready, and I will then send either v3
together with the k3-udma DMA driver (the UDMA driver marked as not for
merge) or the standalone UDMA driver as an RFC, for reference.

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
