Message-ID: <aS8I5e2UguQ2/+uU@lizhi-Precision-Tower-5810>
Date: Tue, 2 Dec 2025 10:42:29 -0500
From: Frank Li <Frank.li@....com>
To: Koichiro Den <den@...inux.co.jp>
Cc: ntb@...ts.linux.dev, linux-pci@...r.kernel.org,
dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org,
mani@...nel.org, kwilczynski@...nel.org, kishon@...nel.org,
bhelgaas@...gle.com, corbet@....net, vkoul@...nel.org,
jdmason@...zu.us, dave.jiang@...el.com, allenbh@...il.com,
Basavaraj.Natikar@....com, Shyam-sundar.S-k@....com,
kurt.schwemmer@...rosemi.com, logang@...tatee.com,
jingoohan1@...il.com, lpieralisi@...nel.org, robh@...nel.org,
jbrunet@...libre.com, fancer.lancer@...il.com, arnd@...db.de,
pstanner@...hat.com, elfring@...rs.sourceforge.net
Subject: Re: [RFC PATCH v2 20/27] NTB: ntb_transport: Introduce remote eDMA
backed transport mode
On Tue, Dec 02, 2025 at 03:43:10PM +0900, Koichiro Den wrote:
> On Mon, Dec 01, 2025 at 04:41:05PM -0500, Frank Li wrote:
> > On Sun, Nov 30, 2025 at 01:03:58AM +0900, Koichiro Den wrote:
> > > Add a new transport backend that uses a remote DesignWare eDMA engine
> > > located on the NTB endpoint to move data between host and endpoint.
> > >
...
> > > +#include "ntb_edma.h"
> > > +
> > > +/*
> > > + * The interrupt register offsets below are taken from the DesignWare
> > > + * eDMA "unrolled" register map (EDMA_MF_EDMA_UNROLL). The remote eDMA
> > > + * backend currently only supports this layout.
> > > + */
> > > +#define DMA_WRITE_INT_STATUS_OFF 0x4c
> > > +#define DMA_WRITE_INT_MASK_OFF 0x54
> > > +#define DMA_WRITE_INT_CLEAR_OFF 0x58
> > > +#define DMA_READ_INT_STATUS_OFF 0xa0
> > > +#define DMA_READ_INT_MASK_OFF 0xa8
> > > +#define DMA_READ_INT_CLEAR_OFF 0xac
> >
> > Not sure why you need to access the eDMA registers directly, since the
> > eDMA driver is already exposed as a dmaengine driver.
>
> These are intended for EP use. In my current design I intentionally don't
> use the standard dw-edma dmaengine driver on the EP side.
Why not?
>
> >
> > > +
> > > +#define NTB_EDMA_NOTIFY_MAX_QP 64
> > > +
...
> > > +
> > > + virq = irq_create_fwspec_mapping(&fwspec);
> > > + of_node_put(parent);
> > > + return (virq > 0) ? virq : -EINVAL;
> > > +}
> > > +
> > > +static irqreturn_t ntb_edma_isr(int irq, void *data)
> > > +{
> >
> > Not sure why dw_edma_interrupt_write/read() doesn't work for your case.
> > You could just register a callback with the dmaengine.
>
> If we ran dw_edma_probe() on both the EP and RC sides and let the dmaengine
> callbacks handle int_status/int_clear, I think we could hit races. One side
> might clear a status bit before the other side has a chance to see it and
> invoke its callback. Please correct me if I'm missing something here.
Could you use different channels for each side?
>
> To avoid that, in my current implementation, the RC side handles the
> status/int_clear registers in the usual way, and the EP side only tries to
> suppress needless edma_int as much as possible.
>
> That said, I'm now wondering if it would be better to set LIE=0/RIE=1 for
> the DMA transfer channels and LIE=1/RIE=0 for the notification channel.
> That would require some changes on dw-edma core.
If dw-edma works as a remote DMA, it should enable RIE, like dw-edma-pcie.c
does, though nothing has actually used that path recently.
Using the eDMA as a doorbell would be a new use case, and I think it is quite
useful.
> >
> > > + struct ntb_edma_interrupt *v = data;
> > > + u32 mask = BIT(EDMA_RD_CH_NUM);
> > > + u32 i, val;
> > > +
...
> > > + ret = dw_edma_probe(chip);
> >
> > I think dw_edma_probe() should be in ntb_hw_epf.c, which provides the
> > dmaengine support.
> >
> > On the EP side, the default DWC controller driver should already set up
> > the eDMA engine, so with the correct filter function you should be able
> > to get a DMA channel.
>
> I intentionally hid the eDMA on the EP side in the .dts patch in
> [RFC PATCH v2 26/27], so that only the RC side manages the eDMA remotely
> and the potential race condition I mentioned above is avoided.
It would be better to improve the eDMA core to support some DMA channels
working locally and others working remotely.
Frank
>
> Thanks for reviewing,
> Koichiro
>
> >
> > Frank
> >
> > > + if (ret) {
> > > + dev_err(&ndev->dev, "dw_edma_probe failed: %d\n", ret);
> > > + return ret;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
...
> > > +{
> > > + spin_lock_init(&qp->ep_tx_lock);
> > > + spin_lock_init(&qp->ep_rx_lock);
> > > + spin_lock_init(&qp->rc_lock);
> > > +}
> > > +
> > > +static const struct ntb_transport_backend_ops edma_backend_ops = {
> > > + .setup_qp_mw = ntb_transport_edma_setup_qp_mw,
> > > + .tx_free_entry = ntb_transport_edma_tx_free_entry,
> > > + .tx_enqueue = ntb_transport_edma_tx_enqueue,
> > > + .rx_enqueue = ntb_transport_edma_rx_enqueue,
> > > + .rx_poll = ntb_transport_edma_rx_poll,
> > > + .debugfs_stats_show = ntb_transport_edma_debugfs_stats_show,
> > > +};
> > > +#endif /* CONFIG_NTB_TRANSPORT_EDMA */
> > > +
> > > /**
> > > * ntb_transport_link_up - Notify NTB transport of client readiness to use queue
> > > * @qp: NTB transport layer queue to be enabled
> > > --
> > > 2.48.1
> > >