Message-ID: <20191015143129.GA32565@AlexGordeev-DPT-VI0092>
Date: Tue, 15 Oct 2019 16:31:30 +0200
From: Alexander Gordeev <a.gordeev.box@...il.com>
To: Dan Carpenter <dan.carpenter@...cle.com>
Cc: devel@...verdev.osuosl.org, Michael Chen <micchen@...era.com>,
dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] dmaengine: avalon: Intel Avalon-MM DMA Interface
for PCIe
On Tue, Oct 15, 2019 at 04:19:55PM +0300, Dan Carpenter wrote:
> On Tue, Oct 15, 2019 at 01:24:50PM +0200, Alexander Gordeev wrote:
> > On Thu, Oct 10, 2019 at 02:30:34PM +0300, Dan Carpenter wrote:
> > > On Thu, Oct 10, 2019 at 10:51:45AM +0200, Alexander Gordeev wrote:
> > > > On Wed, Oct 09, 2019 at 09:53:23PM +0300, Dan Carpenter wrote:
> > > > > > > > + u32 *rd_flags = hw->dma_desc_table_rd.cpu_addr->flags;
> > > > > > > > + u32 *wr_flags = hw->dma_desc_table_wr.cpu_addr->flags;
> > > > > > > > + struct avalon_dma_desc *desc;
> > > > > > > > + struct virt_dma_desc *vdesc;
> > > > > > > > + bool rd_done;
> > > > > > > > + bool wr_done;
> > > > > > > > +
> > > > > > > > + spin_lock(lock);
> >
> > [*]
> >
> > > > > > > > +
> > > > > > > > + rd_done = (hw->h2d_last_id < 0);
> > > > > > > > + wr_done = (hw->d2h_last_id < 0);
> > > > > > > > +
> > > > > > > > + if (rd_done && wr_done) {
> > > > > > > > + spin_unlock(lock);
> > > > > > > > + return IRQ_NONE;
> > > > > > > > + }
> > > > > > > > +
> > > > > > > > + do {
> > > > > > > > + if (!rd_done && rd_flags[hw->h2d_last_id])
> > > > > > > > + rd_done = true;
> > > > > > > > +
> > > > > > > > + if (!wr_done && wr_flags[hw->d2h_last_id])
> > > > > > > > + wr_done = true;
> > > > > > > > + } while (!rd_done || !wr_done);
>
>
> Thinking about this some more, my initial instinct was still correct
> actually. If we're holding the lock to prevent the CPU from writing
> to it then how is hw->d2h_last_id updated in the other thread? Either
> it must deadlock or it must be a race condition.
The hw->d2h_last_id and hw->h2d_last_id indexes are not expected to be
updated while we are in the handler.
It is wr_flags[hw->d2h_last_id] and/or rd_flags[hw->h2d_last_id] that are
set from zero to one by the DMA controller at, or before, the moment it
sends the MSI interrupt. Therefore, once we are in the handler, at least
one of wr_flags[hw->d2h_last_id] and rd_flags[hw->h2d_last_id] should be
non-zero.
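To make that concrete, here is a simplified sketch (illustrative names,
not the exact structures from the patch) of what the handler relies on:
the done flags are plain DWORDs in a single coherent DMA allocation,
cleared by the driver when descriptors are submitted and written to 1
only by the Descriptor Controller:

	/* Sketch only: the status DWORDs live in one coherent allocation,
	 * so the device's writes are visible to the CPU without explicit
	 * cache maintenance.
	 */
	struct desc_status {
		u32 flags[NR_DESCS];	/* done bits, written by the device */
	};

	/* probe: one coherent allocation shared by CPU and device */
	status = dma_alloc_coherent(dev, sizeof(*status), &status_dma,
				    GFP_KERNEL);

	/* submit: clear the flag of every descriptor we post */
	status->flags[id] = 0;

	/* MSI handler: a non-zero flag means that descriptor completed */
	done = READ_ONCE(status->flags[id]);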
However, the Intel documentation describes the above with suspicious
wording: "The Descriptor Controller writes a 1 to the done bit of
the status DWORD to indicate successful completion. The Descriptor
Controller also sends an MSI interrupt for the final descriptor.
After receiving this MSI, host software can poll the done bit to
determine status."
(The "the status DWORD" in the excerpt above are located in coherent DMA
memory-mapped arrays wr_flags[hw->d2h_last_id] and rd_flags[hw->h2d_last_id])
The confusing statement is "After receiving this MSI, host software can
poll the done bit to determine status." Why would one poll a bit *after*
the MSI is received? In a good design it should already be set by the time
the MSI arrives. Maybe they meant "check" rather than "poll", or implied
that the CPU could still see zero in the status DWORD due to DMA memory
incoherency. Anyway, that has never been observed; I added the loop out of
paranoia, and it never loops for me.
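If the unbounded loop is a concern, a capped variant along these lines
(sketch only, the retry limit is arbitrary) would keep a misbehaving
device from wedging the handler:

	unsigned int retries = 100;	/* arbitrary upper bound */

	while (!rd_done || !wr_done) {
		if (!rd_done && READ_ONCE(rd_flags[hw->h2d_last_id]))
			rd_done = true;
		if (!wr_done && READ_ONCE(wr_flags[hw->d2h_last_id]))
			wr_done = true;
		if (rd_done && wr_done)
			break;

		if (!--retries) {
			spin_unlock(lock);
			return IRQ_NONE;	/* done bits never showed up */
		}
		cpu_relax();
	}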
HTH
> regards,
> dan carpenter
>