Message-ID: <1347539671.13103.1542.camel@edumazet-glaptop>
Date: Thu, 13 Sep 2012 14:34:31 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Or Gerlitz <or.gerlitz@...il.com>
Cc: Shlomo Pongartz <shlomop@...lanox.com>,
Rick Jones <rick.jones2@...com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Tom Herbert <therbert@...gle.com>
Subject: Re: GRO aggregation
On Thu, 2012-09-13 at 14:05 +0200, Eric Dumazet wrote:
> But there is no real difference in throughput.
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> index 6c4f935..435c35e 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> @@ -96,8 +96,8 @@
> /* Receive fragment sizes; we use at most 4 fragments (for 9600 byte MTU
> * and 4K allocations) */
> enum {
> - FRAG_SZ0 = 512 - NET_IP_ALIGN,
> - FRAG_SZ1 = 1024,
> + FRAG_SZ0 = 1536 - NET_IP_ALIGN,
> + FRAG_SZ1 = 2048,
> FRAG_SZ2 = 4096,
> FRAG_SZ3 = MLX4_EN_ALLOC_SIZE
> };
>
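
For reference, a rough user-space sketch (not driver code) of what that size
bump changes for a common 1514-byte frame, assuming the RX path fills the
fragments above in order; the 16384-byte last fragment stands in for
MLX4_EN_ALLOC_SIZE and is an assumption, not a value taken from the header:

/* Rough sketch, not driver code: split a frame across the fragment sizes
 * quoted above, filling fragments in order until the frame is consumed.
 * The 16384-byte last fragment is an assumed stand-in for MLX4_EN_ALLOC_SIZE.
 */
#include <stdio.h>

#define NET_IP_ALIGN 2

static void split_frame(int frame_len, const int *frag_sz, int nfrags)
{
	int i;

	for (i = 0; i < nfrags && frame_len > 0; i++) {
		int used = frame_len < frag_sz[i] ? frame_len : frag_sz[i];

		printf("frag %d: %d bytes\n", i, used);
		frame_len -= used;
	}
}

int main(void)
{
	const int old_sz[4] = { 512 - NET_IP_ALIGN, 1024, 4096, 16384 };
	const int new_sz[4] = { 1536 - NET_IP_ALIGN, 2048, 4096, 16384 };

	printf("old sizes:\n");
	split_frame(1514, old_sz, 4);	/* 510 + 1004: two fragments */
	printf("new sizes:\n");
	split_frame(1514, new_sz, 4);	/* 1514: fits in the first fragment */
	return 0;
}

With the old sizes a full 1514-byte frame spans two fragments (510 + 1004
bytes); with the new sizes it fits entirely in the first one.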
Oh well, adding one prefetch() gives ~10% more throughput.
I guess this mlx4 driver needs some care.
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 5aba5ec..547eec8 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -38,6 +38,7 @@
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/vmalloc.h>
+#include <linux/prefetch.h>
#include "mlx4_en.h"
@@ -617,7 +618,8 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget)
!((dev->features & NETIF_F_LOOPBACK) ||
priv->validate_loopback))
goto next;
-
+ /* avoid cache miss in tcp_gro_receive() */
+ prefetch((char *)ethh + 64);
/*
* Packet is OK - process it.
*/
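
The idea behind the prefetch: with 64-byte cache lines, eth (14) + IPv4 (20)
+ TCP with timestamps (32) comes to 66 bytes, so tcp_gro_receive() can end up
touching a second cache line when it parses the TCP header and options.
Prefetching that line while the driver is still working on the first one
hides the miss. A minimal sketch of the pattern, with rx_prefetch_headers()
being an assumed helper name rather than something the patch adds:

/* Minimal sketch (assumed helper, not part of the patch): pull in the cache
 * line that follows the start of the headers so the later header parsing in
 * tcp_gro_receive() does not stall on it. The patch itself only adds the
 * "+ 64" prefetch, hard-coding the 64-byte line size.
 */
#include <linux/prefetch.h>
#include <linux/if_ether.h>

static inline void rx_prefetch_headers(const struct ethhdr *ethh)
{
	/* Line holding the Ethernet and IP headers. */
	prefetch(ethh);
	/* Next line, where the tail of the TCP header/options can land. */
	prefetch((const char *)ethh + 64);
}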