Message-ID: <aXEEq5SvkdFYgG9z@lizhi-Precision-Tower-5810>
Date: Wed, 21 Jan 2026 11:54:03 -0500
From: Frank Li <Frank.li@....com>
To: "Verma, Devendra" <Devendra.Verma@....com>
Cc: "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
	"mani@...nel.org" <mani@...nel.org>,
	"vkoul@...nel.org" <vkoul@...nel.org>,
	"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Simek, Michal" <michal.simek@....com>
Subject: Re: [PATCH v8 1/2] dmaengine: dw-edma: Add AMD MDB Endpoint Support

On Tue, Jan 20, 2026 at 11:36:51AM +0000, Verma, Devendra wrote:
> [AMD Official Use Only - AMD Internal Distribution Only]
>
> > -----Original Message-----
> > From: Frank Li <Frank.li@....com>
> > Sent: Monday, January 19, 2026 9:30 PM
> > To: Verma, Devendra <Devendra.Verma@....com>
> > Cc: bhelgaas@...gle.com; mani@...nel.org; vkoul@...nel.org;
> > dmaengine@...r.kernel.org; linux-pci@...r.kernel.org; linux-
> > kernel@...r.kernel.org; Simek, Michal <michal.simek@....com>
> > Subject: Re: [PATCH v8 1/2] dmaengine: dw-edma: Add AMD MDB Endpoint
> > Support
> >
> > Caution: This message originated from an External Source. Use proper
> > caution when opening attachments, clicking links, or responding.
> >
> >
> > On Mon, Jan 19, 2026 at 09:09:11AM +0000, Verma, Devendra wrote:
> > > [AMD Official Use Only - AMD Internal Distribution Only]
> > >
> > > Hi Frank
> > >
> > > Please check my comments inline.
> > >
> > > Regards,
> > > Devendra
> > >
> > > > -----Original Message-----
> > > > From: Frank Li <Frank.li@....com>
> > > > Sent: Thursday, January 15, 2026 9:51 PM
> > > > To: Verma, Devendra <Devendra.Verma@....com>
> > > > Cc: bhelgaas@...gle.com; mani@...nel.org; vkoul@...nel.org;
> > > > dmaengine@...r.kernel.org; linux-pci@...r.kernel.org; linux-
> > > > kernel@...r.kernel.org; Simek, Michal <michal.simek@....com>
> > > > Subject: Re: [PATCH v8 1/2] dmaengine: dw-edma: Add AMD MDB Endpoint
> > > > Support
> > > >
> > > > Caution: This message originated from an External Source. Use proper
> > > > caution when opening attachments, clicking links, or responding.
> > > >
> > > >
> > > > On Fri, Jan 09, 2026 at 05:33:53PM +0530, Devendra K Verma wrote:
> > > > > AMD MDB PCIe endpoint support. For AMD-specific support, added the
> > > > > following:
> > > > >   - AMD supported PCIe Device IDs and Vendor ID (Xilinx).
> > > > >   - AMD MDB specific driver data
> > > > >   - AMD MDB specific VSEC capability to retrieve the device DDR
> > > > >     base address.
> > > > >
> > > > > Signed-off-by: Devendra K Verma <devendra.verma@....com>
> > > > > ---
> > > > > Changes in v8:
> > > > > Changed the constant names to include the product vendor.
> > > > > Moved the vendor specific code to vendor specific functions.
> > > > >
> > > > > Changes in v7:
> > > > > Introduced vendor specific functions to retrieve the vsec data.
> > > > >
> > > > > Changes in v6:
> > > > > Included "sizes.h" header and used the appropriate definitions
> > > > > instead of constants.
> > > > >
> > > > > Changes in v5:
> > > > > Added the definitions for Xilinx specific VSEC header id,
> > > > > revision, and register offsets.
> > > > > Corrected the error type when no physical offset found for device
> > > > > side memory.
> > > > > Corrected the order of variables.
> > > > >
> > > > > Changes in v4:
> > > > > Configured 8 read and 8 write channels for the Xilinx vendor.
> > > > > Added checks to validate the vendor ID for the vendor-specific vsec id.
> > > > > Added a Xilinx-specific vendor id for the vsec specific to Xilinx.
> > > > > Added the LL and data region offsets and size as input params to
> > > > > the function dw_edma_set_chan_region_offset().
> > > > > Moved the LL and data region offsets assignment to a function for
> > > > > the Xilinx-specific case.
> > > > > Corrected comments.
> > > > >
> > > > > Changes in v3:
> > > > > Corrected a typo when assigning AMD (Xilinx) vsec id macro and
> > > > > condition check.
> > > > >
> > > > > Changes in v2:
> > > > > Reverted the devmem_phys_off type to u64.
> > > > > Renamed the function appropriately to suit the functionality for
> > > > > setting the LL & data region offsets.
> > > > >
> > > > > Changes in v1:
> > > > > Removed the pci device id from pci_ids.h file.
> > > > > Added the vendor id macro as per the suggested method.
> > > > > Changed the type of the newly added devmem_phys_off variable.
> > > > > Added logic to assign offsets for LL and data region blocks in
> > > > > case more channels are enabled than given in the amd_mdb_data
> > > > > struct.
> > > > > ---
> > > > >  drivers/dma/dw-edma/dw-edma-pcie.c | 192 ++++++++++++++++++++++++++++++++++---
> > > > >  1 file changed, 178 insertions(+), 14 deletions(-)
> > > > >
> > > > > diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c
> > > > > index 3371e0a7..2efd149 100644
> > > > > --- a/drivers/dma/dw-edma/dw-edma-pcie.c
> > > > > +++ b/drivers/dma/dw-edma/dw-edma-pcie.c
> > > > > @@ -14,14 +14,35 @@
> > > > >  #include <linux/pci-epf.h>
> > > > >  #include <linux/msi.h>
> > > > >  #include <linux/bitfield.h>
> > > > > +#include <linux/sizes.h>
> > > > >
> > > > >  #include "dw-edma-core.h"
> > > > >
> > > > > -#define DW_PCIE_VSEC_DMA_ID                  0x6
> > > > > -#define DW_PCIE_VSEC_DMA_BAR                 GENMASK(10, 8)
> > > > > -#define DW_PCIE_VSEC_DMA_MAP                 GENMASK(2, 0)
> > > > > -#define DW_PCIE_VSEC_DMA_WR_CH                       GENMASK(9, 0)
> > > > > -#define DW_PCIE_VSEC_DMA_RD_CH                       GENMASK(25, 16)
> > > > > +/* Synopsys */
> > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_ID         0x6
> > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_BAR                GENMASK(10, 8)
> > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_MAP                GENMASK(2, 0)
> > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_WR_CH              GENMASK(9, 0)
> > > > > +#define DW_PCIE_SYNOPSYS_VSEC_DMA_RD_CH              GENMASK(25, 16)
> > > >
> > > > Sorry for jumping in at v8.
> > > > According to my understanding, 'DW' means 'Synopsys'.
> > > >
> > >
> > > Yes, DW means DesignWare, representing Synopsys here.
> > > For the sake of clarity, a distinction was required to separate the
> > > names of macros that serve a similar purpose for another IP, Xilinx in
> > > this case. Otherwise it is confusing which macros to use for which
> > > vendor. This also helps in the future: if any vendor needs to retrieve
> > > a new or different VSEC ID, all it needs to do is define macros that
> > > clearly show the association with that vendor, thus eliminating the
> > > confusion.
> >
> > If someone wants to reuse the driver, the driver owner takes
> > responsibility for finding the differences.
> >
> > If you define a whole new set of register macros, it is hard for the
> > reader to find the real differences.
> >
>
> It is not about the register set but about the VSEC capability, which can
> differ in purpose even for similar IPs. In the current case one VSEC ID
> serves a similar purpose for both IPs, while VSEC ID = 0x20 differs in
> meaning between Synopsys and Xilinx, so I think it is OK to define new
> macros as long as they do not create confusion.

But the rename of the existing macros needs to go into a separate patch.
And I think using XILINX_PCIE_MDB_* for the new definitions should be
enough, without renaming anything. Everyone knows DW is Synopsys.
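
Something like this is what I have in mind. Just a rough sketch; the
XILINX_PCIE_MDB_* names are only an example, and the values are the ones
already in your patch:

	/* existing Synopsys definitions stay exactly as they are */
	#define DW_PCIE_VSEC_DMA_ID		0x6
	#define DW_PCIE_VSEC_DMA_BAR		GENMASK(10, 8)
	#define DW_PCIE_VSEC_DMA_MAP		GENMASK(2, 0)
	#define DW_PCIE_VSEC_DMA_WR_CH		GENMASK(9, 0)
	#define DW_PCIE_VSEC_DMA_RD_CH		GENMASK(25, 16)

	/* only the new Xilinx-specific additions get a vendor prefix */
	#define PCI_DEVICE_ID_XILINX_B054	0xb054
	#define XILINX_PCIE_MDB_VSEC_ID		0x20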

>
> > >
> > > > > +
> > > > > +/* AMD MDB (Xilinx) specific defines */
> > > > > +#define PCI_DEVICE_ID_XILINX_B054            0xb054
> > > > > +
> > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_ID               0x6
> > > > > +#define DW_PCIE_XILINX_MDB_VSEC_ID           0x20
> > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_BAR              GENMASK(10, 8)
> > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_MAP              GENMASK(2, 0)
> > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_WR_CH    GENMASK(9, 0)
> > > > > +#define DW_PCIE_XILINX_MDB_VSEC_DMA_RD_CH    GENMASK(25, 16)
> > > >
> > > > These definitions are the same. Is it necessary to redefine them again?
> > > >
> > >
...
> > >
> > > As explained above, the name change is required to avoid confusion.
> > > The trigger for having separate names for each IP is the inclusion of
> > > the Xilinx IP, which is why no separate patch was created.
> >
> > Make a separate patch that only renames the macros. Reviewers can simply
> > skip such a trivial rename-only patch.
> >
> > Then add the new ones in another patch.
> >
> > Actually, there is no need to rename at all. You can directly use
> > XILINX_PCIE_*.
> >
> > Frank
>
> Please check the discussion on previous versions of the same patch series.

It'd be better if you put a link here to support your proposal.

> We have this patch as the outcome of those discussions.
> Other reviewers felt that keeping similar names for VSEC IDs that serve a
> similar purpose for two different IPs was causing confusion, which is why
> it was agreed to separate the naming according to the vendor of the VSEC
> ID. Regarding the separate patch, the reason is the introduction of the
> new IP, which mostly supports similar functionality except in the case of
> the VSEC IDs; that is why the name separation became part of these
> patches. It sets the context for changing the names of the existing macros.

Putting the big but trivial part into a separate patch will reduce the
review effort.

Frank

>
> > >
> > > > >
> > > > >       pci_read_config_dword(pdev, vsec + 0x14, &val);
> > > > >       off = val;
> > > > > @@ -157,6 +237,67 @@ static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev,
> > > > >       pdata->rg.off = off;
> > > > >  }
> > > > >
> > > > > +static void dw_edma_pcie_get_xilinx_dma_data(struct pci_dev *pdev,
> > > > > +                                             struct dw_edma_pcie_data *pdata)
> > > > > +{
> > > > > +     u32 val, map;
> > > > > +     u16 vsec;
> > > > > +     u64 off;
> > > > > +
> > > > > +     pdata->devmem_phys_off = DW_PCIE_XILINX_MDB_INVALID_ADDR;
> > > > > +
> > > > > +     vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
> > > > > +                                     DW_PCIE_XILINX_MDB_VSEC_DMA_ID);
> > > > > +     if (!vsec)
> > > > > +             return;
> > > > > +
> > > > > +     pci_read_config_dword(pdev, vsec + PCI_VNDR_HEADER, &val);
> > > > > +     if (PCI_VNDR_HEADER_REV(val) != 0x00 ||
> > > > > +         PCI_VNDR_HEADER_LEN(val) != 0x18)
> > > > > +             return;
> > > > > +
> > > > > +     pci_dbg(pdev, "Detected Xilinx PCIe Vendor-Specific Extended Capability DMA\n");
> > > > > +     pci_read_config_dword(pdev, vsec + 0x8, &val);
> > > > > +     map = FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_MAP, val);
> > > > > +     if (map != EDMA_MF_EDMA_LEGACY &&
> > > > > +         map != EDMA_MF_EDMA_UNROLL &&
> > > > > +         map != EDMA_MF_HDMA_COMPAT &&
> > > > > +         map != EDMA_MF_HDMA_NATIVE)
> > > > > +             return;
> > > > > +
> > > > > +     pdata->mf = map;
> > > > > +     pdata->rg.bar = FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_BAR, val);
> > > > > +
> > > > > +     pci_read_config_dword(pdev, vsec + 0xc, &val);
> > > > > +     pdata->wr_ch_cnt = min_t(u16, pdata->wr_ch_cnt,
> > > > > +                              FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_WR_CH, val));
> > > > > +     pdata->rd_ch_cnt = min_t(u16, pdata->rd_ch_cnt,
> > > > > +                              FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_RD_CH, val));
> > > > > +
> > > > > +     pci_read_config_dword(pdev, vsec + 0x14, &val);
> > > > > +     off = val;
> > > > > +     pci_read_config_dword(pdev, vsec + 0x10, &val);
> > > > > +     off <<= 32;
> > > > > +     off |= val;
> > > > > +     pdata->rg.off = off;
> > > > > +
> > > > > +     vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
> > > > > +                                     DW_PCIE_XILINX_MDB_VSEC_ID);
> > > > > +     if (!vsec)
> > > > > +             return;
> > > > > +
> > > > > +     pci_read_config_dword(pdev,
> > > > > +                           vsec + DW_PCIE_XILINX_MDB_DEVMEM_OFF_REG_HIGH,
> > > > > +                           &val);
> > > > > +     off = val;
> > > > > +     pci_read_config_dword(pdev,
> > > > > +                           vsec + DW_PCIE_XILINX_MDB_DEVMEM_OFF_REG_LOW,
> > > > > +                           &val);
> > > > > +     off <<= 32;
> > > > > +     off |= val;
> > > > > +     pdata->devmem_phys_off = off;
> > > > > +}
> > > > > +
> > > > >  static int dw_edma_pcie_probe(struct pci_dev *pdev,
> > > > >                             const struct pci_device_id *pid)
> > > > >  {
> > > > > @@ -184,7 +325,28 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
> > > > >        * Tries to find if exists a PCIe Vendor-Specific Extended Capability
> > > > >        * for the DMA, if one exists, then reconfigures it.
> > > > >        */
> > > > > -     dw_edma_pcie_get_vsec_dma_data(pdev, vsec_data);
> > > > > +     dw_edma_pcie_get_synopsys_dma_data(pdev, vsec_data);
> > > > > +     dw_edma_pcie_get_xilinx_dma_data(pdev, vsec_data);
> > > > > +
> > > > > +     if (pdev->vendor == PCI_VENDOR_ID_XILINX) {
> > > >
> > > > dw_edma_pcie_get_xilinx_dma_data() should be here.
> > > >
> > > > Frank
> > >
> > > Yes, this is good suggestion. Thanks!
> > >
> > > > > +             /*
> > > > > +              * There is no valid address found for the LL memory
> > > > > +              * space on the device side.
> > > > > +              */
> > > > > +             if (vsec_data->devmem_phys_off == DW_PCIE_XILINX_MDB_INVALID_ADDR)
> > > > > +                     return -ENOMEM;
> > > > > +
> > > > > +             /*
> > > > > +              * Configure the channel LL and data blocks if number of
> > > > > +              * channels enabled in VSEC capability are more than the
> > > > > +              * channels configured in xilinx_mdb_data.
> > > > > +              */
> > > > > +             dw_edma_set_chan_region_offset(vsec_data, BAR_2, 0,
> > > > > +                                            DW_PCIE_XILINX_MDB_LL_OFF_GAP,
> > > > > +                                            DW_PCIE_XILINX_MDB_LL_SIZE,
> > > > > +                                            DW_PCIE_XILINX_MDB_DT_OFF_GAP,
> > > > > +                                            DW_PCIE_XILINX_MDB_DT_SIZE);
> > > > > +     }
> > > > >
> > > > >       /* Mapping PCI BAR regions */
> > > > >       mask = BIT(vsec_data->rg.bar); @@ -367,6 +529,8 @@ static
> > > > >       mask = BIT(vsec_data->rg.bar);
> > > > > @@ -367,6 +529,8 @@ static void dw_edma_pcie_remove(struct pci_dev *pdev)
> > > > >  static const struct pci_device_id dw_edma_pcie_id_table[] = {
> > > > >       { PCI_DEVICE_DATA(SYNOPSYS, EDDA, &snps_edda_data) },
> > > > > +     { PCI_VDEVICE(XILINX, PCI_DEVICE_ID_XILINX_B054),
> > > > > +       (kernel_ulong_t)&xilinx_mdb_data },
> > > > >       { }
> > > > >  };
> > > > >  MODULE_DEVICE_TABLE(pci, dw_edma_pcie_id_table);
> > > > > --
> > > > > 1.8.3.1
> > > > >
