Message-ID: <3kqkdegy56hr7ghwrg42h3rjc3hcpgfz4jkqdhri2j2qjg3crx@cpuy2ehukzdp>
Date: Mon, 25 Sep 2023 19:06:56 +0300
From: Serge Semin <fancer.lancer@...il.com>
To: Köry Maincent <kory.maincent@...tlin.com>
Cc: Cai Huoqing <cai.huoqing@...ux.dev>,
Manivannan Sadhasivam <mani@...nel.org>,
Vinod Koul <vkoul@...nel.org>,
Gustavo Pimentel <Gustavo.Pimentel@...opsys.com>,
dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Herve Codina <herve.codina@...tlin.com>
Subject: Re: [PATCH 4/9] dmaengine: dw-edma: HDMA: Add memory barrier before
starting the DMA transfer in remote setup
Hi Köry
On Tue, Sep 12, 2023 at 10:52:10AM +0200, Köry Maincent wrote:
> Hello Serge,
>
> I am back with a hardware design answer:
> > "Even though the PCIe itself respects the transactions ordering, the
> > AXI bus does not have an end-to-end completion acknowledgement (it
> > terminates at the PCIe EP boundary with bus), and does not guaranteed
> > ordering if accessing different destinations on the Bus. So, an access to LL
> > could be declared complete even though the transactions is still being
> > pipelined in the AXI Bus. (a dozen or so clocks, I can give an accurate
> > number if needed)
> >
> > The access to DMA registers is done through BAR0 “rolling”,
> > so the transaction does not actually go out on the AXI bus and get
> > looped back to the PCIe DMA; rather, it stays inside the PCIe EP.
> >
> > For the above reasons, hypothetically, there’s a chance that even if the DMA
> > LL is accessed before the DMA DB from the PCIe RC side, the DB could be
> > updated before the LL in local memory."
Thanks for the detailed explanation. It doesn't firmly point to the
root cause of the problem, but it mainly confirms a possible race
condition inside the remote PCIe device itself. That's what I meant in
my suggestion 3.
>
> On Thu, 22 Jun 2023 19:22:20 +0300
> Serge Semin <fancer.lancer@...il.com> wrote:
>
> > If we get assured that hardware with such a problem exists (if you get
> > confirmation of supposition 3. above), then we'll need to
> > activate your trick for that hardware only. Adding dummy reads for all
> > the remote eDMA setups doesn't look correct, since it adds additional
> > delay to the execution path, especially seeing that nobody has noticed
> > and reported such a problem so far (for instance, Gustavo didn't see the
> > problem on his device, otherwise he would have fixed it).
> >
> > So if assumption 3. is correct, then I'd suggest the following
> > implementation: define a new dw_edma_chip_flags flag (e.g.
> > DW_EDMA_SLOW_MEM), have it specified via the dw_edma_chip.flags field
> > in the Akida device probe() method, and activate your trick only if
> > that flag is set.
>
> The flag you suggested is about slow memory writes, but as said above the issue
> comes from the AXI bus and not the memory.
The AXI bus is the bus that is utilized to access the LL-memory in your
case. From the CPU perspective it's the same, since the access time
depends on the performance of both parts.
> I am wondering why you don't see
> this issue.
Well, in my case the DW PCIe eDMA controller is _locally_ implemented,
so its CSRs and the LL-memory are accessible from the CPU side over a
system interconnect. The LL-memory is allocated from the system RAM
(CPU<->_AXI IC_<->AXI<->DDR<->RAM), while the DW PCIe CSRs are just
memory-mapped IO space (CPU<->_AXI IC_<->APB<->AXI<->DBI<->CDM).
So in my case:
1. The APB bus is too slow for the CSRs to be updated before the
Linked-List data.
2. The MMIO accessors (writel()/readl()/etc) are defined in such a way
that all the preceding normal memory updates (writes and reads) are
supposed to be completed before any further MMIO access.
So the ordering is mainly assured by 2. in the case of the local DW
PCIe eDMA implementation (see the sketch below).
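As a toy sketch (not the actual driver code; the names here are purely
illustrative), the local case boils down to something like this:

#include <linux/io.h>

/*
 * Toy illustration of the local setup: the LL entry lands in coherent
 * system RAM, the doorbell is a plain MMIO register, and the barrier
 * implied by writel() orders the RAM write before the MMIO store.
 */
static void toy_local_start(u64 *ll_vaddr, u64 ll_entry,
			    void __iomem *db_reg)
{
	ll_vaddr[0] = ll_entry;	/* normal (cacheable) RAM write of an LL entry */
	writel(1, db_reg);	/* implies a write barrier before the MMIO store */
}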
Your configuration is different. You have the DW PCIe eDMA controller
implemented remotely. In that case you have both CSRs and Linked-list
memory accessible over a chain like:
CPU<->_Some IC_<->AXI/Native<->Some PCIe Host<->... PCIe bus ... <-+
                                                                   |
 +-----------------------------------------------------------------+
 |
 +-> DW eDMA
       +> BARx<->CDM CSRs
       +> BARy<->AHB/AXI/APB/etc<->Some SRAM
                        ^
                        |
                 (marked bus)
AFAICS a race condition happens due to the marked bus (the AHB/AXI/APB
link between the BAR and the LL SRAM) being too slow. So in case the LL
and CSRs IO writes are performed right one after another, with no
additional delays or syncs in between them, then indeed the latter one
can be finished earlier than the former one.
> If I understand well, it should be present on all IPs, as the DMA
> registers are internal to the IP and the LL memory is external, behind the
> AXI bus. Did you stress your IP? On my side it appears with lots of operations
> using several (at least 3) threads through 2 DMA channels.
I didn't stress it with such a test. But AFAICS from normal system
implementations, the problem isn't relevant for the locally accessible
DW PCIe eDMA controllers, otherwise at the very least it would have
popped up in many other places in the kernel.
What I meant in my previous message was that it was strange Gustavo
(the original driver developer) didn't spot the problem you were
referring to. He was the only one having the remote DW eDMA hardware
at hand to perform such tests. Anyway, seeing that we've come to some
understanding of the problem, and since based on the DW PCIe RP/EP
internals the CSRs and application memory are indeed normally accessed
over different buses, let's fix the problem as you suggest, just
using the DW_EDMA_CHIP_LOCAL flag. But please:
1. Fix it for both HDMA and EDMA controllers.
2. Create functions like dw_edma_v0_sync_ll_data() and
dw_hdma_v0_sync_ll_data() between the dw_Xdma_v0_core_write_chunk()
and dw_Xdma_v0_core_start() methods, which would perform a dummy read
from the passed LL-chunk in order to sync the remote memory writes
(see the sketch after this list).
3. Based on all our discussions add a saner comment to these methods
about why the dummy-read is needed for the remote DW eDMA setups.
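For reference, item 2. could look roughly like the sketch below. The
field names follow the current dw-edma structures, but treat it as an
illustration of the idea rather than a ready-made patch:

static void dw_edma_v0_sync_ll_data(struct dw_edma_chunk *chunk)
{
	/*
	 * For the remote eDMA setup the CSRs and the LL memory live
	 * behind different buses inside the EP, so issue a dummy read
	 * from the LL region to make sure the posted LL writes have
	 * completed before dw_edma_v0_core_start() rings the doorbell.
	 */
	if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
		readl(chunk->ll_region.vaddr.io);
}

with a matching dw_hdma_v0_sync_ll_data() on the HDMA side.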
-Serge(y)
>
> Köry