Date:	Wed, 2 May 2007 21:10:17 -0500 (CDT)
From:	Kumar Gala <galak@...nel.crashing.org>
To:	Scott Wood <scottwood@...escale.com>
cc:	jgarzik@...ox.com, netdev@...r.kernel.org, linuxppc-dev@...abs.org
Subject: Re: [PATCH v2] gianfar: Add I/O barriers when touching buffer
 descriptor ownership.

On Wed, 2 May 2007, Scott Wood wrote:

> Kumar Gala wrote:
> > Why doesn't marking the bdp pointer volatile resolve the issue in
> > gfar_clean_rx_ring() to ensure load ordering?
>
> Because that only addresses compiler reordering (and does so in a rather
> clumsy way -- not all accesses need to be strongly ordered), not hardware
> reordering.
>
> -Scott
>

So what about something like this, where we do the read only once?

- k

diff --git a/drivers/net/gianfar.c b/drivers/net/gianfar.c
index a06d8d1..9cd7d1e 100644
--- a/drivers/net/gianfar.c
+++ b/drivers/net/gianfar.c
@@ -1438,31 +1438,35 @@ int gfar_clean_rx_ring(struct net_device *dev, int rx_work_limit)
 {
 	struct rxbd8 *bdp;
 	struct sk_buff *skb;
-	u16 pkt_len;
+	u16 pkt_len, status;
+	u32 bd_info;
 	int howmany = 0;
 	struct gfar_private *priv = netdev_priv(dev);

 	/* Get the first full descriptor */
 	bdp = priv->cur_rx;

-	while (!((bdp->status & RXBD_EMPTY) || (--rx_work_limit < 0))) {
+	bd_info = *(u32 *)bdp;
+
+	status = bd_info >> 16;
+	/* Remove the FCS from the packet length */
+	pkt_len = (bd_info & 0xffff) - 4;
+
+	while (!((status & RXBD_EMPTY) || (--rx_work_limit < 0))) {
 		skb = priv->rx_skbuff[priv->skb_currx];

-		if (!(bdp->status &
+		if (!(status &
 		      (RXBD_LARGE | RXBD_SHORT | RXBD_NONOCTET
 		       | RXBD_CRCERR | RXBD_OVERRUN | RXBD_TRUNCATED))) {
 			/* Increment the number of packets */
 			priv->stats.rx_packets++;
 			howmany++;

-			/* Remove the FCS from the packet length */
-			pkt_len = bdp->length - 4;
-
 			gfar_process_frame(dev, skb, pkt_len);

 			priv->stats.rx_bytes += pkt_len;
 		} else {
-			count_errors(bdp->status, priv);
+			count_errors(status, priv);

 			if (skb)
 				dev_kfree_skb_any(skb);
@@ -1480,7 +1484,7 @@ int gfar_clean_rx_ring(struct net_device *dev, int rx_work_limit)
 		priv->rx_skbuff[priv->skb_currx] = skb;

 		/* Update to the next pointer */
-		if (bdp->status & RXBD_WRAP)
+		if (status & RXBD_WRAP)
 			bdp = priv->rx_bd_base;
 		else
 			bdp++;
@@ -1490,6 +1494,11 @@ int gfar_clean_rx_ring(struct net_device *dev, int rx_work_limit)
 		    (priv->skb_currx +
 		     1) & RX_RING_MOD_MASK(priv->rx_ring_size);

+		bd_info = *(u32 *)bdp;
+
+		status = bd_info >> 16;
+		/* Remove the FCS from the packet length */
+		pkt_len = (bd_info & 0xffff) - 4;
 	}

 	/* Update the current rxbd pointer to be the next one */
-
