Message-Id: <1455791573-5270-12-git-send-email-saeedm@mellanox.com>
Date: Thu, 18 Feb 2016 12:32:52 +0200
From: Saeed Mahameed <saeedm@...lanox.com>
To: "David S. Miller" <davem@...emloft.net>
Cc: netdev@...r.kernel.org, Or Gerlitz <ogerlitz@...lanox.com>,
Tal Alon <talal@...lanox.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Tariq Toukan <tariqt@...lanox.com>,
Rana Shahout <ranas@...lanox.com>,
Yevgeny Petrilin <yevgenyp@...lanox.com>,
Matthew Finlay <matt@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>
Subject: [PATCH net-next V1 11/12] net/mlx5e: Add TX stateless offloads for tunneling
From: Matthew Finlay <matt@...lanox.com>
Add support for TSO and TX checksum hardware offloads for tunneled
(encapsulated) traffic.
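When skb->encapsulation is set, the checksum path (CHECKSUM_PARTIAL)
requests both outer L3 and inner L3/L4 checksumming from the HW, and the
LSO path inlines the headers up to and including the inner TCP header.
A condensed sketch, for illustration only (the real changes, gated on
CHECKSUM_PARTIAL and skb_is_gso() respectively, are in the diff below):

	/* Illustration only -- condensed from the two hunks below */
	eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
	if (skb->encapsulation) {
		/* tunneled packet: also offload the inner L3/L4 csum and
		 * replicate outer ETH..inner TCP headers per LSO segment
		 */
		eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM |
				  MLX5_ETH_WQE_L4_INNER_CSUM;
		ihs = skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
	} else {
		eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
		ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
	}
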
Signed-off-by: Matt Finlay <matt@...lanox.com>
Signed-off-by: Saeed Mahameed <saeedm@...lanox.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 22 ++++++++++++++++------
1 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 00d855a..6ce2884 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2015, Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -185,9 +185,14 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_sq *sq, struct sk_buff *skb)
 
         memset(wqe, 0, sizeof(*wqe));
 
-        if (likely(skb->ip_summed == CHECKSUM_PARTIAL))
-                eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM;
-        else
+        if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+                eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
+                if (skb->encapsulation)
+                        eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM |
+                                          MLX5_ETH_WQE_L4_INNER_CSUM;
+                else
+                        eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
+        } else
                 sq->stats.csum_offload_none++;
 
         if (sq->cc != sq->prev_cc) {
@@ -200,8 +205,13 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_sq *sq, struct sk_buff *skb)
                 eseg->mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
                 opcode = MLX5_OPCODE_LSO;
-                ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
-                payload_len = skb->len - ihs;
+
+                if (skb->encapsulation)
+                        ihs = skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
+                else
+                        ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
+
+                payload_len = skb->len - ihs;
                 wi->num_bytes = skb->len +
                                 (skb_shinfo(skb)->gso_segs - 1) * ihs;
                 sq->stats.tso_packets++;
--
1.7.1