Message-ID: <20240718190221.2219835-1-pkaligineedi@google.com>
Date: Thu, 18 Jul 2024 12:02:21 -0700
From: Praveen Kaligineedi <pkaligineedi@...gle.com>
To: netdev@...r.kernel.org
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, willemb@...gle.com, shailend@...gle.com,
hramamurthy@...gle.com, csully@...gle.com, jfraker@...gle.com,
stable@...r.kernel.org, Bailey Forrest <bcf@...gle.com>,
Praveen Kaligineedi <pkaligineedi@...gle.com>, Jeroen de Borst <jeroendb@...gle.com>
Subject: [PATCH net] gve: Fix an edge case for TSO skb validity check
From: Bailey Forrest <bcf@...gle.com>
The NIC requires each TSO segment not to span more than 10
descriptors, and gve_can_send_tso() enforces this limit. However,
the check misses an edge case: when a TSO skb has a frag larger
than the per-descriptor buffer size, the frag is split across
multiple descriptors, and the TSO segment that contains the split
point needs one more descriptor than the current count assumes.
That can push the segment past the 10-descriptor limit without the
skb being rejected. This change fixes the edge case.
Fixes: a57e5de476be ("gve: DQO: Add TX path")
Signed-off-by: Praveen Kaligineedi <pkaligineedi@...gle.com>
Signed-off-by: Bailey Forrest <bcf@...gle.com>
Reviewed-by: Jeroen de Borst <jeroendb@...gle.com>
---
drivers/net/ethernet/google/gve/gve_tx_dqo.c | 22 +++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index 0b3cca3fc792..dc39dc481f21 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -866,22 +866,42 @@ static bool gve_can_send_tso(const struct sk_buff *skb)
const int header_len = skb_tcp_all_headers(skb);
const int gso_size = shinfo->gso_size;
int cur_seg_num_bufs;
+ int last_frag_size;
int cur_seg_size;
int i;
cur_seg_size = skb_headlen(skb) - header_len;
+ last_frag_size = skb_headlen(skb);
cur_seg_num_bufs = cur_seg_size > 0;
for (i = 0; i < shinfo->nr_frags; i++) {
if (cur_seg_size >= gso_size) {
cur_seg_size %= gso_size;
cur_seg_num_bufs = cur_seg_size > 0;
+
+ /* If the last buffer is split in the middle of a TSO
+ * segment, then it will count as two descriptors.
+ */
+ if (last_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
+ int last_frag_remain = last_frag_size %
+ GVE_TX_MAX_BUF_SIZE_DQO;
+
+ /* If the last frag was evenly divisible by
+ * GVE_TX_MAX_BUF_SIZE_DQO, then it will not be
+ * split in the current segment.
+ */
+ if (last_frag_remain &&
+ cur_seg_size > last_frag_remain) {
+ cur_seg_num_bufs++;
+ }
+ }
}
if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
return false;
- cur_seg_size += skb_frag_size(&shinfo->frags[i]);
+ last_frag_size = skb_frag_size(&shinfo->frags[i]);
+ cur_seg_size += last_frag_size;
}
return true;
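For reference, here is a rough userspace sketch of the counting logic
after this fix, with a contrived frag layout that the pre-fix check
would have accepted even though one TSO segment spans 11 buffers. The
names and constants below (MAX_BUFS_PER_SEG, MAX_BUF_SIZE,
can_send_tso(), frag_sizes[]) are illustrative stand-ins, not the
driver's definitions:

#include <stdbool.h>
#include <stdio.h>

#define MAX_BUFS_PER_SEG	10		/* placeholder per-segment limit */
#define MAX_BUF_SIZE		(16 * 1024)	/* placeholder per-descriptor size */

/* Userspace model of the fixed check: headlen is the payload carried in
 * the linear area (skb_headlen() minus the headers), frag_sizes[] stands
 * in for skb_shinfo(skb)->frags.
 */
static bool can_send_tso(int headlen, const int *frag_sizes, int nr_frags,
			 int gso_size)
{
	int cur_seg_size = headlen;
	int cur_seg_num_bufs = cur_seg_size > 0;
	int last_frag_size = headlen;
	int i;

	for (i = 0; i < nr_frags; i++) {
		if (cur_seg_size >= gso_size) {
			cur_seg_size %= gso_size;
			cur_seg_num_bufs = cur_seg_size > 0;

			/* A frag bigger than MAX_BUF_SIZE is posted as several
			 * buffers; if the bytes spilling into the new segment
			 * cross one of those buffer boundaries, the spill costs
			 * two descriptors instead of one.
			 */
			if (last_frag_size > MAX_BUF_SIZE) {
				int last_frag_remain = last_frag_size %
						       MAX_BUF_SIZE;

				if (last_frag_remain &&
				    cur_seg_size > last_frag_remain)
					cur_seg_num_bufs++;
			}
		}

		if (++cur_seg_num_bufs > MAX_BUFS_PER_SEG)
			return false;

		last_frag_size = frag_sizes[i];
		cur_seg_size += last_frag_size;
	}

	return true;
}

int main(void)
{
	/* Contrived layout: a 17384-byte frag (split into 16384-byte and
	 * 1000-byte buffers) followed by nine 70-byte frags, gso_size 2000.
	 * The segment covering payload bytes 16000..17999 spans 11 buffers,
	 * so the fixed check rejects the layout; without the split
	 * adjustment above it would be accepted.
	 */
	int frags[10] = { 17384, 70, 70, 70, 70, 70, 70, 70, 70, 70 };

	printf("can_send_tso: %d (expect 0)\n",
	       can_send_tso(0, frags, 10, 2000));
	return 0;
}

The only behavioral difference from the pre-fix logic is the
last_frag_size bookkeeping and the extra cur_seg_num_bufs++ taken when
the bytes spilling into a new segment cross a buffer boundary inside an
oversized frag.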
--
2.45.2.993.g49e7a77208-goog