Message-ID: <1428964322.8341.47.camel@xylophone.i.decadent.org.uk>
Date: Mon, 13 Apr 2015 23:32:02 +0100
From: Ben Hutchings <ben.hutchings@...ethink.co.uk>
To: Sergei Shtylyov <sergei.shtylyov@...entembedded.com>
Cc: netdev@...r.kernel.org, linux-kernel@...ts.codethink.co.uk,
Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@...esas.com>,
Mitsuhiro Kimura <mitsuhiro.kimura.kc@...esas.com>,
Yoshihiro Kaneko <ykaneko0929@...il.com>,
Yoshihiro Shimoda <yoshihiro.shimoda.uh@...esas.com>,
Alexander Duyck <alexander.h.duyck@...el.com>
Subject: Re: [PATCH net 1/4] sh_eth: Ensure proper ordering of descriptor
active bit write/read
On Fri, 2015-04-10 at 01:21 +0300, Sergei Shtylyov wrote:
> Hello.
>
> On 03/03/2015 03:52 AM, Ben Hutchings wrote:
>
> > When submitting a DMA descriptor, the active bit must be written last.
> > When reading a completed DMA descriptor, the active bit must be read
> > first.
>
> > Add memory barriers to ensure that this ordering is maintained.
>
> Looks like those should have been the lighter dma_rmb() and dma_wmb() instead.
Yes, I think so. I missed those additions to the DMA API.
Ben.
> Correct, Alexander?
>
> > Signed-off-by: Ben Hutchings <ben.hutchings@...ethink.co.uk>
> > ---
> > drivers/net/ethernet/renesas/sh_eth.c | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
> > index 654b48d1e61a..5c212a833bcf 100644
> > --- a/drivers/net/ethernet/renesas/sh_eth.c
> > +++ b/drivers/net/ethernet/renesas/sh_eth.c
> > @@ -1410,6 +1410,8 @@ static int sh_eth_txfree(struct net_device *ndev)
> > txdesc = &mdp->tx_ring[entry];
> > if (txdesc->status & cpu_to_edmac(mdp, TD_TACT))
> > break;
> > + /* TACT bit must be checked before all the following reads */
> > + rmb();
> > /* Free the original skb. */
> > if (mdp->tx_skbuff[entry]) {
> > dma_unmap_single(&ndev->dev, txdesc->addr,
> > @@ -1447,6 +1449,8 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
> > limit = boguscnt;
> > rxdesc = &mdp->rx_ring[entry];
> > while (!(rxdesc->status & cpu_to_edmac(mdp, RD_RACT))) {
> > + /* RACT bit must be checked before all the following reads */
> > + rmb();
> > desc_status = edmac_to_cpu(mdp, rxdesc->status);
> > pkt_len = rxdesc->frame_length;
> >
> > @@ -1526,6 +1530,7 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
> > skb_checksum_none_assert(skb);
> > rxdesc->addr = dma_addr;
> > }
> > + wmb(); /* RACT bit must be set after all the above writes */
> > if (entry >= mdp->num_rx_ring - 1)
> > rxdesc->status |=
> > cpu_to_edmac(mdp, RD_RACT | RD_RFP | RD_RDEL);
> > @@ -2195,6 +2200,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
> > }
> > txdesc->buffer_length = skb->len;
> >
> > + wmb(); /* TACT bit must be set after all the above writes */
> > if (entry >= mdp->num_tx_ring - 1)
> > txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE);
> > else
>
> WBR, Sergei
>