Message-ID:
<SA1PR12MB8120661263F6B1202598A3439594A@SA1PR12MB8120.namprd12.prod.outlook.com>
Date: Fri, 23 Jan 2026 14:26:47 +0000
From: "Verma, Devendra" <Devendra.Verma@....com>
To: Frank Li <Frank.li@....com>
CC: "bhelgaas@...gle.com" <bhelgaas@...gle.com>, "mani@...nel.org"
<mani@...nel.org>, "vkoul@...nel.org" <vkoul@...nel.org>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Simek,
Michal" <michal.simek@....com>, "Verma, Devendra" <Devendra.Verma@....com>
Subject: RE: [PATCH v8 1/2] dmaengine: dw-edma: Add AMD MDB Endpoint Support
[AMD Official Use Only - AMD Internal Distribution Only]
Hi Frank,
Please see my responses inline.
Regards,
Devendra
> -----Original Message-----
> From: Frank Li <Frank.li@....com>
> Sent: Wednesday, January 21, 2026 10:24 PM
> To: Verma, Devendra <Devendra.Verma@....com>
> Cc: bhelgaas@...gle.com; mani@...nel.org; vkoul@...nel.org;
> dmaengine@...r.kernel.org; linux-pci@...r.kernel.org; linux-
> kernel@...r.kernel.org; Simek, Michal <michal.simek@....com>
> Subject: Re: [PATCH v8 1/2] dmaengine: dw-edma: Add AMD MDB Endpoint
> Support
>
[Removed some extra headers to reduce the size of the mail]
> > > > > On Fri, Jan 09, 2026 at 05:33:53PM +0530, Devendra K Verma wrote:
> > > > > > Add AMD MDB PCIe endpoint support. For AMD-specific support, added
> > > > > > the following:
> > > > > > - AMD supported PCIe Device IDs and Vendor ID (Xilinx).
> > > > > > - AMD MDB specific driver data
> > > > > > - AMD MDB specific VSEC capability to retrieve the device DDR
> > > > > > base address.
> > > > > >
> > > > > > Signed-off-by: Devendra K Verma <devendra.verma@....com>
> > > > > > ---
> > > > > > Changes in v8:
> > > > > > Changed the constant names to include the product vendor.
> > > > > > Moved the vendor specific code to vendor specific functions.
> > > > > >
> > > > > > Changes in v7:
> > > > > > Introduced vendor specific functions to retrieve the vsec data.
> > > > > >
> > > > > > Changes in v6:
> > > > > > Included "sizes.h" header and used the appropriate definitions
> > > > > > instead of constants.
> > > > > >
> > > > > > Changes in v5:
> > > > > > Added the definitions for Xilinx specific VSEC header id,
> > > > > > revision, and register offsets.
> > > > > > Corrected the error type when no physical offset found for
> > > > > > device side memory.
> > > > > > Corrected the order of variables.
> > > > > >
> > > > > > Changes in v4:
> > > > > > Configured 8 read and 8 write channels for the Xilinx vendor.
> > > > > > Added checks to validate the vendor ID for the vendor-specific vsec id.
> > > > > > Added Xilinx specific vendor id for vsec specific to Xilinx
> > > > > > Added the LL and data region offsets, size as input params to
> > > > > > function dw_edma_set_chan_region_offset().
> > > > > > Moved the LL and data region offsets assignment to function
> > > > > > for Xilinx specific case.
> > > > > > Corrected comments.
> > > > > >
> > > > > > Changes in v3:
> > > > > > Corrected a typo when assigning AMD (Xilinx) vsec id macro and
> > > > > > condition check.
> > > > > >
> > > > > > Changes in v2:
> > > > > > Reverted the devmem_phys_off type to u64.
> > > > > > Renamed the function appropriately to suit the functionality
> > > > > > for setting the LL & data region offsets.
> > > > > >
> > > > > > Changes in v1:
> > > > > > Removed the pci device id from pci_ids.h file.
> > > > > > Added the vendor id macro as per the suggested method.
> > > > > > Changed the type of the newly added devmem_phys_off variable.
> > > > > > Added logic to assign offsets for the LL and data region blocks
> > > > > > in case more channels are enabled than given in the
> > > > > > amd_mdb_data struct.
> > > > > > ---
> > > > > > drivers/dma/dw-edma/dw-edma-pcie.c | 192 ++++++++++++++++++++++++++++++++++---
> > > > > > 1 file changed, 178 insertions(+), 14 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c
> > > > > > index 3371e0a7..2efd149 100644
> > > > > > --- a/drivers/dma/dw-edma/dw-edma-pcie.c
> > > > > > +++ b/drivers/dma/dw-edma/dw-edma-pcie.c
> > > > > > @@ -14,14 +14,35 @@
> > > > > > #include <linux/pci-epf.h>
> > > > > > #include <linux/msi.h>
> > > > > > #include <linux/bitfield.h>
> > > > > > +#include <linux/sizes.h>
> > > > > >
> > > > > > #include "dw-edma-core.h"
> > > > > >
> > > > > > -#define DW_PCIE_VSEC_DMA_ID 0x6
> > > > > > -#define DW_PCIE_VSEC_DMA_BAR GENMASK(10, 8)
> > > > > > -#define DW_PCIE_VSEC_DMA_MAP GENMASK(2, 0)
> > > > > > -#define DW_PCIE_VSEC_DMA_WR_CH GENMASK(9, 0)
> > > > > > -#define DW_PCIE_VSEC_DMA_RD_CH GENMASK(25, 16)
> > > > > > +/* Synopsys */
> > > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_ID 0x6
> > > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_BAR GENMASK(10, 8)
> > > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_MAP GENMASK(2, 0)
> > > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_WR_CH GENMASK(9, 0)
> > > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_RD_CH GENMASK(25, 16)
> > > > >
> > > > > Sorry for jumping in at v8.
> > > > > According to my understanding, 'DW' means 'Synopsys'.
> > > > >
> > > >
> > > > Yes, DW means DesignWare, representing Synopsys here.
> > > > For the sake of clarity, a distinction was required to separate the
> > > > names of macros that serve a similar purpose for another IP, Xilinx
> > > > in this case. Otherwise it is unclear which macros apply to which
> > > > vendor. This also helps in the future: if any vendor needs to
> > > > retrieve a new or different VSEC ID, all it takes is defining macros
> > > > that clearly show the association with that vendor, eliminating the
> > > > confusion.
> > >
> > > If you want to reuse the driver, the driver owner takes responsibility
> > > for finding the differences.
> > >
> > > If you define a whole new set of registers, it is hard for the reader
> > > to find the real difference.
> > >
> >
> > It is not about the register set, but about the VSEC capabilities, which
> > can differ in purpose even for similar IPs. In the current case one VSEC
> > ID serves a similar purpose for both IPs, while VSEC ID = 0x20 differs in
> > meaning between Synopsys and Xilinx, so I think it is OK to define new
> > macros as long as they do not create confusion.
>
> But the rename of the existing macros needs to go into a separate patch.
> And I think using XILINX_PCIE_MDB_* should be enough, without renaming.
> Everyone knows DW is SYNOPSYS.
>
Since the Xilinx-related code piggybacks on the existing DW code, the "DW"
prefix was added to the Xilinx macros as well, following the naming format
already used in the file.
If the need arises later, the files can be separated, and the names you
suggest would then clearly express the distinction between the two drivers.
I have received multiple reviews regarding the naming, and the convention
used in this version is what was finally agreed upon.
Please check the discussion in the following thread:
https://lore.kernel.org/all/SA1PR12MB81204C807C2F7C72730EC82295ADA@SA1PR12MB8120.namprd12.prod.outlook.com/
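
For clarity, a minimal sketch of the point about VSEC IDs (illustration only,
not part of the patch; the helper name below is made up): the numeric VSEC ID
is always matched together with a vendor ID, so the Synopsys and Xilinx
capabilities stay distinct even where the ID values happen to be equal.

/* Illustration only: the vendor ID disambiguates numerically equal VSEC IDs. */
static u16 dw_edma_find_vendor_vsec(struct pci_dev *pdev)
{
	if (pdev->vendor == PCI_VENDOR_ID_XILINX)
		return pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
						DW_PCIE_XILINX_MDB_VSEC_DMA_ID);

	return pci_find_vsec_capability(pdev, PCI_VENDOR_ID_SYNOPSYS,
					DW_PCIE_SYNOPSYS_VSEC_DMA_ID);
}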
> >
> > > >
> > > > > > +
> > > > > > +/* AMD MDB (Xilinx) specific defines */
> > > > > > +#define PCI_DEVICE_ID_XILINX_B054 0xb054
> > > > > > +
> > > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_ID 0x6
> > > > > > +#define DW_PCIE_XILINX_MDB_VSEC_ID 0x20
> > > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_BAR GENMASK(10, 8)
> > > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_MAP GENMASK(2, 0)
> > > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_WR_CH GENMASK(9, 0)
> > > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_RD_CH GENMASK(25, 16)
> > > > >
> > > > > These definitions are the same. Is it necessary to redefine them?
> > > > >
> > > >
> ...
> > > >
> > > > As explained above, the name change is required to avoid confusion.
> > > > The trigger for having separate names per IP is the inclusion of the
> > > > Xilinx IP, which is why no separate patch was created.
> > >
> > > Make a separate patch that only renames the macros. Reviewers can
> > > simply skip such a trivial patch.
> > >
> > > Then add the new ones.
> > >
> > > Actually, there is no need to rename at all. You can directly use XILINX_PCIE_*.
> > >
> > > Frank
> >
> > Please check the discussion on previous versions of the same patch series.
>
> It'd be better if you put a link here to support your proposal.
>
> > This patch is the outcome of those discussions.
> > Other reviewers felt that keeping similar names for VSEC IDs that serve a
> > similar purpose for two different IPs was causing confusion, which is why
> > it was agreed to separate the naming per the vendor of the VSEC ID.
> > Regarding a separate patch: the reason is that the new IP mostly supports
> > the same functionality except for the VSEC IDs, which is why the name
> > separation became part of these patches. It sets the context for changing
> > the names of the existing macros.
>
> Putting the large but trivial part into a separate patch will reduce the review effort.
>
> Frank
>
As requested, a link to the relevant discussion has been added to my
comments above; please take a look.
> >
> > > >
> > > > > >
> > > > > > pci_read_config_dword(pdev, vsec + 0x14, &val);
> > > > > > off = val;
> > > > > > @@ -157,6 +237,67 @@ static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev,
> > > > > > pdata->rg.off = off;
> > > > > > }
> > > > > >
> > > > > > +static void dw_edma_pcie_get_xilinx_dma_data(struct pci_dev *pdev,
> > > > > > + struct dw_edma_pcie_data *pdata)
> > > > > > +{
> > > > > > + u32 val, map;
> > > > > > + u16 vsec;
> > > > > > + u64 off;
> > > > > > +
> > > > > > + pdata->devmem_phys_off = DW_PCIE_XILINX_MDB_INVALID_ADDR;
> > > > > > +
> > > > > > + vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
> > > > > > + DW_PCIE_XILINX_MDB_VSEC_DMA_ID);
> > > > > > + if (!vsec)
> > > > > > + return;
> > > > > > +
> > > > > > + pci_read_config_dword(pdev, vsec + PCI_VNDR_HEADER, &val);
> > > > > > + if (PCI_VNDR_HEADER_REV(val) != 0x00 ||
> > > > > > + PCI_VNDR_HEADER_LEN(val) != 0x18)
> > > > > > + return;
> > > > > > +
> > > > > > + pci_dbg(pdev, "Detected Xilinx PCIe Vendor-Specific Extended Capability DMA\n");
> > > > > > + pci_read_config_dword(pdev, vsec + 0x8, &val);
> > > > > > + map = FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_MAP, val);
> > > > > > + if (map != EDMA_MF_EDMA_LEGACY &&
> > > > > > + map != EDMA_MF_EDMA_UNROLL &&
> > > > > > + map != EDMA_MF_HDMA_COMPAT &&
> > > > > > + map != EDMA_MF_HDMA_NATIVE)
> > > > > > + return;
> > > > > > +
> > > > > > + pdata->mf = map;
> > > > > > + pdata->rg.bar = FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_BAR, val);
> > > > > > +
> > > > > > + pci_read_config_dword(pdev, vsec + 0xc, &val);
> > > > > > + pdata->wr_ch_cnt = min_t(u16, pdata->wr_ch_cnt,
> > > > > > + FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_WR_CH, val));
> > > > > > + pdata->rd_ch_cnt = min_t(u16, pdata->rd_ch_cnt,
> > > > > > + FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_RD_CH, val));
> > > > > > +
> > > > > > + pci_read_config_dword(pdev, vsec + 0x14, &val);
> > > > > > + off = val;
> > > > > > + pci_read_config_dword(pdev, vsec + 0x10, &val);
> > > > > > + off <<= 32;
> > > > > > + off |= val;
> > > > > > + pdata->rg.off = off;
> > > > > > +
> > > > > > + vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
> > > > > > + DW_PCIE_XILINX_MDB_VSEC_ID);
> > > > > > + if (!vsec)
> > > > > > + return;
> > > > > > +
> > > > > > + pci_read_config_dword(pdev,
> > > > > > + vsec + DW_PCIE_XILINX_MDB_DEVMEM_OFF_REG_HIGH,
> > > > > > + &val);
> > > > > > + off = val;
> > > > > > + pci_read_config_dword(pdev,
> > > > > > + vsec + DW_PCIE_XILINX_MDB_DEVMEM_OFF_REG_LOW,
> > > > > > + &val);
> > > > > > + off <<= 32;
> > > > > > + off |= val;
> > > > > > + pdata->devmem_phys_off = off;
> > > > > > +}
> > > > > > +
> > > > > > static int dw_edma_pcie_probe(struct pci_dev *pdev,
> > > > > > const struct pci_device_id *pid)
> > > > > > { @@
> > > > > > -184,7 +325,28 @@ static int dw_edma_pcie_probe(struct pci_dev
> > > *pdev,
> > > > > > * Tries to find if exists a PCIe Vendor-Specific Extended Capability
> > > > > > * for the DMA, if one exists, then reconfigures it.
> > > > > > */
> > > > > > - dw_edma_pcie_get_vsec_dma_data(pdev, vsec_data);
> > > > > > + dw_edma_pcie_get_synopsys_dma_data(pdev, vsec_data);
> > > > > > + dw_edma_pcie_get_xilinx_dma_data(pdev, vsec_data);
> > > > > > +
> > > > > > + if (pdev->vendor == PCI_VENDOR_ID_XILINX) {
> > > > >
> > > > > dw_edma_pcie_get_xilinx_dma_data() should be here.
> > > > >
> > > > > Frank
> > > >
> > > > Yes, this is a good suggestion. Thanks!
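
To make the agreed change concrete, here is a rough sketch of how the probe
path could look in the next revision, with the Xilinx lookup moved inside the
vendor check (names taken from this patch; untested, not the posted code):

	dw_edma_pcie_get_synopsys_dma_data(pdev, vsec_data);

	if (pdev->vendor == PCI_VENDOR_ID_XILINX) {
		/* Query the Xilinx VSEC only on Xilinx devices. */
		dw_edma_pcie_get_xilinx_dma_data(pdev, vsec_data);

		/* No valid device-side address for the LL memory space. */
		if (vsec_data->devmem_phys_off == DW_PCIE_XILINX_MDB_INVALID_ADDR)
			return -ENOMEM;

		/* Set LL/data block offsets when more channels are enabled. */
		dw_edma_set_chan_region_offset(vsec_data, BAR_2, 0,
					       DW_PCIE_XILINX_MDB_LL_OFF_GAP,
					       DW_PCIE_XILINX_MDB_LL_SIZE,
					       DW_PCIE_XILINX_MDB_DT_OFF_GAP,
					       DW_PCIE_XILINX_MDB_DT_SIZE);
	}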
> > > >
> > > > > > + /*
> > > > > > + * There is no valid address found for the LL memory
> > > > > > + * space on the device side.
> > > > > > + */
> > > > > > + if (vsec_data->devmem_phys_off == DW_PCIE_XILINX_MDB_INVALID_ADDR)
> > > > > > + return -ENOMEM;
> > > > > > +
> > > > > > + /*
> > > > > > + * Configure the channel LL and data blocks if number of
> > > > > > + * channels enabled in VSEC capability are more than the
> > > > > > + * channels configured in xilinx_mdb_data.
> > > > > > + */
> > > > > > + dw_edma_set_chan_region_offset(vsec_data, BAR_2, 0,
> > > > > > + DW_PCIE_XILINX_MDB_LL_OFF_GAP,
> > > > > > + DW_PCIE_XILINX_MDB_LL_SIZE,
> > > > > > + DW_PCIE_XILINX_MDB_DT_OFF_GAP,
> > > > > > + DW_PCIE_XILINX_MDB_DT_SIZE);
> > > > > > + }
> > > > > >
> > > > > > /* Mapping PCI BAR regions */
> > > > > > mask = BIT(vsec_data->rg.bar);
> > > > > > @@ -367,6 +529,8 @@ static void dw_edma_pcie_remove(struct pci_dev *pdev)
> > > > > >
> > > > > > static const struct pci_device_id dw_edma_pcie_id_table[] = {
> > > > > > { PCI_DEVICE_DATA(SYNOPSYS, EDDA, &snps_edda_data) },
> > > > > > + { PCI_VDEVICE(XILINX, PCI_DEVICE_ID_XILINX_B054),
> > > > > > + (kernel_ulong_t)&xilinx_mdb_data },
> > > > > > { }
> > > > > > };
> > > > > > MODULE_DEVICE_TABLE(pci, dw_edma_pcie_id_table);
> > > > > > --
> > > > > > 1.8.3.1
> > > > > >