Message-ID:
<SA1PR12MB812027AB20BEE7C28F30A5FE9509A@SA1PR12MB8120.namprd12.prod.outlook.com>
Date: Thu, 11 Sep 2025 11:42:31 +0000
From: "Verma, Devendra" <Devendra.Verma@....com>
To: Bjorn Helgaas <helgaas@...nel.org>
CC: "bhelgaas@...gle.com" <bhelgaas@...gle.com>, "mani@...nel.org"
<mani@...nel.org>, "vkoul@...nel.org" <vkoul@...nel.org>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Simek,
Michal" <michal.simek@....com>
Subject: RE: [PATCH 2/2] dmaengine: dw-edma: Add non-LL mode
Thank you Bjorn for your inputs!
Please check my comments inline.
Regards,
Devendra
> -----Original Message-----
> From: Bjorn Helgaas <helgaas@...nel.org>
> Sent: Wednesday, September 10, 2025 23:27
> To: Verma, Devendra <Devendra.Verma@....com>
> Cc: bhelgaas@...gle.com; mani@...nel.org; vkoul@...nel.org;
> dmaengine@...r.kernel.org; linux-pci@...r.kernel.org; linux-
> kernel@...r.kernel.org; Simek, Michal <michal.simek@....com>
> Subject: Re: [PATCH 2/2] dmaengine: dw-edma: Add non-LL mode
>
>
> On Wed, Sep 10, 2025 at 12:30:39PM +0000, Verma, Devendra wrote:
> > > From: Bjorn Helgaas <helgaas@...nel.org>
>
> [redundant headers removed]
>
> > > On Fri, Sep 05, 2025 at 03:46:59PM +0530, Devendra K Verma wrote:
> > > > AMD MDB IP supports Linked List (LL) mode as well as non-LL mode.
> > > > The current code does not have the mechanisms to enable the DMA
> > > > transactions using the non-LL mode. The following two cases are
> > > > added with this patch:
>
> > > > +static u64 dw_edma_get_phys_addr(struct pci_dev *pdev,
> > > > +                                 struct dw_edma_pcie_data *pdata,
> > > > +                                 enum pci_barno bar)
> > > > +{
> > > > +       if (pdev->vendor == PCI_VENDOR_ID_XILINX)
> > > > +               return pdata->phys_addr;
> > > > +       return pci_bus_address(pdev, bar);
> > >
> > > This doesn't seem right. pci_bus_address() returns pci_bus_addr_t,
> > > so pdata->phys_addr should also be a pci_bus_addr_t, and the
> > > function should return pci_bus_addr_t.
> > >
> > > A pci_bus_addr_t is not a "phys_addr"; it is an address that is
> > > valid on the PCI side of a PCI host bridge, which may be different
> > > than the CPU physical address on the CPU side of the bridge because
> > > of things like IOMMUs.
> > >
> > > Seems like the struct dw_edma_region.paddr should be renamed to
> > > something like "bus_addr" and made into a pci_bus_addr_t.
> >
> > In the AMD case it is not an address that the host can use over PCI;
> > it is the device-side DDR physical address, which is not known to the
> > host. That is why the VSEC capability is used to tell the host the
> > DDR offset, so the host can correctly program the LLP register of the
> > DMA controller. Without the LLP programmed, the controller does not
> > know where to start reading the LL for DMA processing. The DMA
> > controller requires the physical address of the LL in its own DDR.
>
> I guess "device side DDR offset" means this Xilinx device has some DDR internal to
> the PCI device, and the CPU cannot access it via a BAR?
>
The host can access the DDR internal to the PCI device via a BAR, but that
involves an iATU translation. The host can use its own virtual/physical
addresses to access that DDR. The issue is not with the host but with the
DMA controller, which does not understand the host-side address, e.g. the
address returned by pci_bus_address(pdev, bar).
> But you need addresses inside that device DDR even though the CPU can't access
> it, and the VSEC gives you the base address of the DDR?
>
> This makes me wonder about how dw_edma_region is used elsewhere because
> some of those places seem like they assume the CPU *can* access this area.
>
> dw_pcie_edma_ll_alloc() uses dmam_alloc_coherent(), which allocates RAM and
> gives you a CPU virtual address (ll->vaddr.mem) and a DMA address (ll->paddr).
> dw_edma_pcie_probe() will later overwrite the
> ll->paddr with the DDR offset based on VSEC.
>
> But it sounds like ll->vaddr.mem is useless for Xilinx devices since the CPU can't
> access the DDR?
>
> If the CPU can't use ll->vaddr.mem, what happens in places like
> dw_hdma_v0_write_ll_data() where we access it?
>
> Bjorn
As explained above, the host can access the device-side internal DDR through
one of the BARs, which involves an iATU translation configured on the device
side. The issue is with the DMA controller: it does not understand host-side
virtual/physical addresses, so the LLP register must be programmed with the
device-side DDR offset, which is a valid physical address on the device side.
The host needs the device-side DDR offset (learned via the VSEC) to program
the LLP register, from which the DMA controller starts reading the DDR on
its side.
Hope this clarifies.