Message-ID: <aJopotHoEEbe6ll3@vaman>
Date: Mon, 11 Aug 2025 23:04:26 +0530
From: Vinod Koul <vkoul@...nel.org>
To: Logan Gunthorpe <logang@...tatee.com>
Cc: Kelvin Cao <kelvin.cao@...rochip.com>, dmaengine@...r.kernel.org,
linux-kernel@...r.kernel.org, George.Ge@...rochip.com,
christophe.jaillet@...adoo.fr, hch@...radead.org
Subject: Re: [PATCH v8 1/1] dmaengine: switchtec-dma: Introduce Switchtec DMA
engine PCI driver
On 05-08-25, 13:15, Logan Gunthorpe wrote:
> Hi Vinod,
>
> Kelvin has asked me to take over the upstreaming of the switchtec_dma
> driver. It's been over a year since the last posting, but we intend to
> send a new version shortly after the merge window. I've fixed a
> number of issues and gone through the bulk of the changes requested but
> there are two points that I'm going to have to push back on.
>
> This driver shares a lot of similarities with the plx_dma driver I wrote
> a few years ago and had similar issues.
>
> On 2024-04-07 06:21, Vinod Koul wrote:
> > On 18-03-24, 09:33, Kelvin Cao wrote:
> >> +static dma_cookie_t
> >> +switchtec_dma_tx_submit(struct dma_async_tx_descriptor *desc)
> >> + __releases(swdma_chan->submit_lock)
> >> +{
> >> + struct switchtec_dma_chan *swdma_chan =
> >> + to_switchtec_dma_chan(desc->chan);
> >> + dma_cookie_t cookie;
> >> +
> >> + cookie = dma_cookie_assign(desc);
> >> +
> >> + spin_unlock_bh(&swdma_chan->submit_lock);
> >
> > I was expecting desc to be pushing to pending list?? where is that done
>
> The driver maintains a pending list in DMA coherent memory. The pending
> list is written directly in switchtec_dma_prep_desc(), so there is
> nothing left to do for the hardware when a new operation is submitted
> to the queue. When switchtec_dma_issue_pending() is called, the sq_tail
> pointer is updated, which signals the hardware to start pulling all the
> queued requests. I don't see any other way this could be done in the
> dmaengine framework.
>
> > Also consider using virt-dma for desc management, you dont need to
> > handle that on your own
>
> I looked into this and a virtual queue does not make sense to me for
> this device. The driver maintains its own queue that the hardware reads
> directly so there would be no benefit in having another queue that then
> needs to be copied to the queue the hardware reads from.
>
> >> +static enum dma_status switchtec_dma_tx_status(struct dma_chan *chan,
> >> + dma_cookie_t cookie,
> >> + struct dma_tx_state *txstate)
> >> +{
> >> + struct switchtec_dma_chan *swdma_chan = to_switchtec_dma_chan(chan);
> >> + enum dma_status ret;
> >> +
> >> + ret = dma_cookie_status(chan, cookie, txstate);
> >> + if (ret == DMA_COMPLETE)
> >> + return ret;
> >> +
> >> + switchtec_dma_process_desc(swdma_chan);
> >
> > This is *wrong*, you cannot process desc in status API, Please read the
> > documentation again and if in doubt pls ask
>
> I don't see any other way to do this. There's no other place to cleanup
> the completed descriptors. This is exactly what was done in the plx-dma
> driver which copied the IOAT driver that began this pattern. There was
> very similar feedback when I submitted the plx-dma driver[1] and I
> pointed out there's no other way to do this.
We should try fixing these.
>
> I'll rename the function to make this a little clearer, but I can't see
> any other way to implement the driver without doing this.
Don't you get a notification from the hardware on completion? That should
trigger this, as should the issue_pending calls...
--
~Vinod