Message-ID: <CABEDWGxBxmiKjoPUSUaUBXUhKkUTXVX0U9ooRou8tcWJojb52g@mail.gmail.com>
Date: Fri, 31 May 2019 11:16:46 -0700
From: Alan Mikhak <alan.mikhak@...ive.com>
To: Kishon Vijay Abraham I <kishon@...com>
Cc: Gustavo Pimentel <Gustavo.Pimentel@...opsys.com>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"lorenzo.pieralisi@....com" <lorenzo.pieralisi@....com>,
"arnd@...db.de" <arnd@...db.de>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"jingoohan1@...il.com" <jingoohan1@...il.com>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"wen.yang99@....com.cn" <wen.yang99@....com.cn>,
"kjlu@....edu" <kjlu@....edu>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"palmer@...ive.com" <palmer@...ive.com>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>,
Vinod Koul <vkoul@...nel.org>
Subject: Re: [PATCH] PCI: endpoint: Add DMA to Linux PCI EP Framework
On Thu, May 30, 2019 at 10:08 PM Kishon Vijay Abraham I <kishon@...com> wrote:
> Hi Alan,
> >
> > Hi Kishon,
> >
> > I have some improvements in mind for a v2 patch in response to
> > feedback from Gustavo Pimentel that the current implementation is HW
> > specific. I hesitate from submitting a v2 patch because it seems best
> > to seek comment on possible directions this may be taking.
> >
> > One alternative is to wait for or modify test functions in
> > pci-epf-test.c to call DMAengine client APIs, if possible. I imagine
> > pci-epf-test.c test functions would still allocate the necessary local
> > buffer on the endpoint side for the same canned tests for everyone to
> > use. They would prepare the buffer in the existing manner by filling
> > it with random bytes and calculating a CRC in the case of a write test.
> > However, they would then initiate DMA operations by using DMAengine
> > client APIs in a generic way instead of calling memcpy_toio() and
> > memcpy_fromio(). They would post-process the buffer in the existing
>
> No, you can't remove the memcpy_toio()/memcpy_fromio() APIs. There could be
> platforms without system DMA, or with system DMA but without MEMCPY channels,
> or without DMA in their PCI controller.
I agree. I wouldn't remove memcpy_toio()/memcpy_fromio(). That is the reason
this patch introduces the '-d' flag for pcitest: to communicate the user's
intent across the PCIe bus to pci-epf-test so that the endpoint can initiate
the transfer using either memcpy_toio()/memcpy_fromio() or DMA.
> > manner, such as checking the CRC in the case of a read test.
> > Finally, they would release the resources and report results back to
> > the user of pcitest across the PCIe bus through the existing methods.
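As a rough illustration of that direction, an endpoint-side transfer using
only generic DMAengine client calls might look something like the sketch
below. The function name, the polling-based wait, and the error handling are
illustrative only and are not part of this patch:

#include <linux/dmaengine.h>
#include <linux/err.h>

/*
 * Minimal sketch: request any MEMCPY-capable channel and copy 'len' bytes
 * from 'src' to 'dst'. The caller is assumed to have already dma-mapped
 * both the local buffer and the remote (host) window.
 */
static int pci_epf_test_dmaengine_copy(dma_addr_t dst, dma_addr_t src,
                                        size_t len)
{
        struct dma_async_tx_descriptor *tx;
        struct dma_chan *chan;
        dma_cap_mask_t mask;
        dma_cookie_t cookie;
        int ret = 0;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);

        /* If no channel is available, fall back to memcpy_toio()/fromio() */
        chan = dma_request_chan_by_mask(&mask);
        if (IS_ERR(chan))
                return PTR_ERR(chan);

        tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx) {
                ret = -EIO;
                goto out;
        }

        cookie = dmaengine_submit(tx);
        ret = dma_submit_error(cookie);
        if (ret)
                goto out;

        dma_async_issue_pending(chan);

        /* A real implementation would use a completion callback; polling
         * here keeps the sketch short. */
        if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
                ret = -ETIMEDOUT;
out:
        dma_release_channel(chan);
        return ret;
}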
> >
> > Another alternative I have in mind for v2 is to change the struct
> > pci_epc_dma that this patch added to pci-epc.h from the following:
> >
> > struct pci_epc_dma {
> >         u32 control;
> >         u32 size;
> >         u64 sar;
> >         u64 dar;
> > };
> >
> > to something similar to the following:
> >
> > struct pci_epc_dma {
> >         size_t size;
> >         void *buffer;
> >         int flags;
> > };
> >
> > The 'flags' field can be a bit field or separate boolean values to
> > specify such things as linked-list mode vs single-block, etc.
> > Associated #defines would be removed from pci-epc.h to be replaced if
> > needed with something generic. The 'size' field specifies the size of
> > the DMA transfer that can fit in the buffer.
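As a purely illustrative example, the generic replacement could be as simple
as a single mode bit; the name below is a placeholder, not proposed API:

/* Illustrative placeholder only -- not an actual pci-epc.h definition */
#define PCI_EPC_DMA_FLAG_LINKED_LIST    (1 << 0)    /* clear = single block */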
>
> I still have to look more closely at your DMA patch, but linked-list mode or
> single-block mode shouldn't be a user-selectable option; it should be
> determined by the size of the transfer.
Please consider the following when taking a closer look at this patch.
In my specific use case, I need to verify that any valid block size,
including a one-byte transfer, can be transferred across the PCIe bus
by memcpy_toio()/memcpy_fromio() or by DMA, either as a single block or
as a linked list. That is why, instead of deciding based on transfer
size, this patch introduces the '-L' flag for pcitest to communicate the
user's intent across the PCIe bus to pci-epf-test so that the endpoint
can initiate the DMA transfer as a single block or in linked-list mode.
When the user issues 'pcitest -r' to perform a read buffer test,
pci-epf-test calls pci_epf_test_write(), which uses memcpy_toio(). As
before, a read from the user's point of view is a write from the
endpoint's point of view.
When the user issues 'pcitest -r -d', pci-epf-test calls a new function,
pci_epf_test_write_dma(), to initiate a single-block DMA transfer.
When the user issues 'pcitest -r -d -L', pci-epf-test calls a new
function, pci_epf_test_write_dma_list(), to initiate a linked-list DMA
transfer.
The '-d' and '-L' flags also apply when the user performs a write buffer
test with the '-w' flag. The user can specify any valid transfer size
for any of the above examples using the '-s' flag, as before.
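For example, with illustrative transfer sizes, the invocations described
above would be:

    pcitest -r -s 1             read test, memcpy_toio() on the endpoint
    pcitest -r -d -s 1024       read test, single-block DMA transfer
    pcitest -r -d -L -s 1024    read test, linked-list DMA transfer
    pcitest -w -d -L -s 1       one-byte write test, linked-list DMA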
> > That way the DMA test functions in pci-epf-test.c can simply kmalloc()
> > and prepare a local buffer on the endpoint side for the DMA transfer
> > and pass its pointer down the stack to the lower layers using the
> > 'buffer' field. This would allow different PCIe controller drivers to
> > implement DMA or not according to their needs. Each implementer can
> > decide to use the DMAengine client API, which would be preferable, or
> > to read and write DMA hardware registers directly to suit their needs.
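For illustration, the pci-epf-test side of that hand-off might look roughly
like the sketch below; pci_epc_start_dma() and PCI_EPC_DMA_FLAG_LINKED_LIST
are hypothetical stand-ins for whatever generic hook the controller driver
ends up exposing, and struct pci_epc_dma is the generic form proposed above:

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
#include <linux/random.h>
#include <linux/slab.h>

/* Sketch only: pci_epc_start_dma() and PCI_EPC_DMA_FLAG_LINKED_LIST are
 * hypothetical names used for illustration. */
static int pci_epf_test_dma_sketch(struct pci_epf *epf, size_t size,
                                   bool linked_list)
{
        struct pci_epc_dma dma;
        int ret;

        dma.buffer = kmalloc(size, GFP_KERNEL);
        if (!dma.buffer)
                return -ENOMEM;

        dma.size = size;
        dma.flags = linked_list ? PCI_EPC_DMA_FLAG_LINKED_LIST : 0;

        /* Prepare the buffer as the existing write test does (random data;
         * a real version would also compute the CRC). */
        get_random_bytes(dma.buffer, size);

        /* Hand the descriptor down to the controller driver, which may use
         * the DMAengine client API or program its own registers. */
        ret = pci_epc_start_dma(epf->epc, epf->func_no, &dma);

        kfree(dma.buffer);
        return ret;
}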
>
> Yes, that would be my preferred method as well. In fact, I had implemented
> pci_epf_tx() in [1] as a way for pci-epf-test to pass a buffer address to the
> endpoint controller driver. I had also implemented helpers for platforms
> using system DMA (i.e., using DMAengine).
>
> Thanks
> Kishon
>
> [1] ->
> http://git.ti.com/cgit/cgit.cgi/ti-linux-kernel/ti-linux-kernel.git/tree/drivers/pci/endpoint/pci-epf-core.c?h=ti-linux-4.19.y
> >
> > I would appreciate feedback and comment on such choices as part of this review.
Thanks for all your comments and for providing the link to your
implementation of pci_epf_tx() in [1] above. It clarifies a lot and
provides a very useful reference.
Regards,
Alan Mikhak