Message-ID: <20250910175637.GA1541229@bhelgaas>
Date: Wed, 10 Sep 2025 12:56:37 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: "Verma, Devendra" <Devendra.Verma@....com>
Cc: "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"mani@...nel.org" <mani@...nel.org>,
"vkoul@...nel.org" <vkoul@...nel.org>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Simek, Michal" <michal.simek@....com>
Subject: Re: [PATCH 2/2] dmaengine: dw-edma: Add non-LL mode
On Wed, Sep 10, 2025 at 12:30:39PM +0000, Verma, Devendra wrote:
> > From: Bjorn Helgaas <helgaas@...nel.org>
[redundant headers removed]
> > On Fri, Sep 05, 2025 at 03:46:59PM +0530, Devendra K Verma wrote:
> > > AMD MDB IP supports Linked List (LL) mode as well as non-LL mode.
> > > The current code does not have the mechanisms to enable the DMA
> > > transactions using the non-LL mode. The following two cases are added
> > > with this patch:
> > > +static u64 dw_edma_get_phys_addr(struct pci_dev *pdev,
> > > + struct dw_edma_pcie_data *pdata,
> > > + enum pci_barno bar) {
> > > + if (pdev->vendor == PCI_VENDOR_ID_XILINX)
> > > + return pdata->phys_addr;
> > > + return pci_bus_address(pdev, bar);
> >
> > This doesn't seem right. pci_bus_address() returns
> > pci_bus_addr_t, so pdata->phys_addr should also be a
> > pci_bus_addr_t, and the function should return pci_bus_addr_t.
> >
> > A pci_bus_addr_t is not a "phys_addr"; it is an address that is
> > valid on the PCI side of a PCI host bridge, which may be different
> > than the CPU physical address on the CPU side of the bridge
> > because of things like IOMMUs.
> >
> > Seems like the struct dw_edma_region.paddr should be renamed to
> > something like "bus_addr" and made into a pci_bus_addr_t.
>
> In the AMD case this is not an address accessible from the host via
> PCI; it is the device-side DDR offset of the physical address, which
> is not known to the host. That is why the VSEC capability is used to
> tell the host the DDR offset so it can correctly program the LLP of
> the DMA controller. Without the LLP programmed, the controller will
> not know where to start reading the LL for DMA processing. The DMA
> controller requires the physical address of the LL present in its
> own DDR.
I guess "device side DDR offset" means this Xilinx device has some DDR
internal to the PCI device, and the CPU cannot access it via a BAR?
But you need addresses inside that device DDR even though the CPU
can't access it, and the VSEC gives you the base address of the DDR?
This makes me wonder about how dw_edma_region is used elsewhere
because some of those places seem like they assume the CPU *can*
access this area.
dw_pcie_edma_ll_alloc() uses dmam_alloc_coherent(), which allocates
RAM and gives you a CPU virtual address (ll->vaddr.mem) and a DMA
address (ll->paddr). dw_edma_pcie_probe() will later overwrite the
ll->paddr with the DDR offset based on VSEC.
But it sounds like ll->vaddr.mem is useless for Xilinx devices since
the CPU can't access the DDR?
If the CPU can't use ll->vaddr.mem, what happens in places like
dw_hdma_v0_write_ll_data() where we access it?
Bjorn