Message-ID: <20190507171042.GS16052@vkoul-mobl>
Date: Tue, 7 May 2019 22:40:43 +0530
From: Vinod Koul <vkoul@...nel.org>
To: Kazuhiro Kasai <kasai.kazuhiro@...ionext.com>
Cc: robh+dt@...nel.org, mark.rutland@....com,
dmaengine@...r.kernel.org, devicetree@...r.kernel.org,
orito.takao@...ionext.com, sugaya.taichi@...ionext.com,
kanematsu.shinji@...ionext.com, jaswinder.singh@...aro.org,
masami.hiramatsu@...aro.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
On 07-05-19, 14:39, Kazuhiro Kasai wrote:
> On Fri, Apr 26, 2019 at 17:16 +0530, Vinod Koul wrote:
> > On 25-03-19, 13:15, Kazuhiro Kasai wrote:
> > > +struct m10v_dma_chan {
> > > +	struct dma_chan chan;
> > > +	struct m10v_dma_device *mdmac;
> > > +	void __iomem *regs;
> > > +	int irq;
> > > +	struct m10v_dma_desc mdesc;
> >
> > So there is a *single* descriptor? Not a list??
>
> Yes, single descriptor.
And why is that? You can create a list, keep accepting descriptors, issue
them to the hardware, and get better perf!
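Something along these lines (untested sketch, field names made up by me)
is what I have in mind:

	/* one descriptor per request, kept on per-channel lists */
	struct m10v_dma_desc {
		struct dma_async_tx_descriptor txd;
		struct list_head node;		/* entry in pending/active */
		dma_addr_t src;
		dma_addr_t dst;
		size_t len;
	};

	struct m10v_dma_chan {
		struct dma_chan chan;
		struct m10v_dma_device *mdmac;
		void __iomem *regs;
		int irq;
		spinlock_t lock;		/* protects the lists below */
		struct list_head pending;	/* submitted, not yet on hw */
		struct list_head active;	/* running on the hw */
	};

Then issue_pending() moves work from pending to the hardware, and the
completion interrupt starts the next descriptor from the list.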
> > > +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> > > +{
> > > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> > > +	dma_cookie_t cookie;
> > > +	unsigned long flags;
> > > +
> > > +	spin_lock_irqsave(&mchan->lock, flags);
> > > +	cookie = dma_cookie_assign(txd);
> > > +	spin_unlock_irqrestore(&mchan->lock, flags);
> > > +
> > > +	return cookie;
> >
> > This sounds like vchan_tx_submit(). I think you can use the virt-dma
> > layer, get rid of the artificial limit in the driver, and be able to
> > queue up txns on the dmaengine.
>
> OK, I will try to use the virt-dma layer in the next version.
And you will get lists to manage descriptors for free! So you can use
those to support multiple txns as well!
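Roughly like this (untested, and assuming m10v_dma_chan embeds a
struct virt_dma_chan vc set up with vchan_init() at probe time), using
the helpers from drivers/dma/virt-dma.h:

	#include <linux/slab.h>

	#include "virt-dma.h"

	struct m10v_dma_desc {
		struct virt_dma_desc vd;	/* lists come from virt-dma */
		dma_addr_t src;
		dma_addr_t dst;
		size_t len;
	};

	static struct dma_async_tx_descriptor *
	m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
				   dma_addr_t src, size_t len,
				   unsigned long flags)
	{
		struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
		struct m10v_dma_desc *mdesc;

		/* allocate a fresh descriptor per request instead of
		 * reusing a single embedded one */
		mdesc = kzalloc(sizeof(*mdesc), GFP_NOWAIT);
		if (!mdesc)
			return NULL;

		mdesc->src = src;
		mdesc->dst = dst;
		mdesc->len = len;

		/* vchan_tx_prep() initialises the txd and supplies a
		 * tx_submit that queues onto the channel lists */
		return vchan_tx_prep(&mchan->vc, &mdesc->vd, flags);
	}

Your open-coded tx_submit and the -EBUSY cookie dance then go away
entirely; vchan_tx_submit() assigns the cookie under the channel lock
for you.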
> > > +static struct dma_async_tx_descriptor *
> > > +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> > > +			   dma_addr_t src, size_t len, unsigned long flags)
> > > +{
> > > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> > > +
> > > +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> > > +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> > > +	mchan->mdesc.txd.callback = NULL;
> > > +	mchan->mdesc.txd.flags = flags;
> > > +	mchan->mdesc.txd.cookie = -EBUSY;
> > > +
> > > +	mchan->mdesc.len = len;
> > > +	mchan->mdesc.src = src;
> > > +	mchan->mdesc.dst = dst;
> > > +
> > > +	return &mchan->mdesc.txd;
> >
> > So you support a single descriptor and don't check whether it has
> > already been configured. I guess this has been tested by doing one txn
> > at a time, not by submitting a bunch of txns and waiting for them to
> > complete. Please fix that to really enable the dmaengine capabilities.
>
> Thank you for the advice. I want to fix this, and I have two questions.
>
> 1. Does the virt-dma layer help to fix this?
Yes
> 2. Can dmatest test those dmaengine capabilities?
Yes, for memcpy operations; see Documentation/driver-api/dmaengine/dmatest.rst
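You can drive it through module parameters along these lines (the channel
name here is just an example):

	% modprobe dmatest
	% echo dma0chan0 > /sys/module/dmatest/parameters/channel
	% echo 2000 > /sys/module/dmatest/parameters/timeout
	% echo 1 > /sys/module/dmatest/parameters/iterations
	% echo 1 > /sys/module/dmatest/parameters/run

The results then show up in the kernel log.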
--
~Vinod