Message-ID: <Y5EwunX89Nq59vf0@x130>
Date: Wed, 7 Dec 2022 16:32:58 -0800
From: Saeed Mahameed <saeed@...nel.org>
To: Coco Li <lixiaoyan@...gle.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Michael Chan <michael.chan@...adcom.com>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC net-next v5 2/2] bnxt: Use generic HBH removal helper in tx path
On 07 Dec 14:54, Coco Li wrote:
>Eric Dumazet implemented Big TCP, which allows bigger TSO/GRO packet sizes
>for IPv6 traffic. See patch series:
>'commit 89527be8d8d6 ("net: add IFLA_TSO_{MAX_SIZE|SEGS} attributes")'
>
>This reduces the number of packets traversing the networking stack and
>should usually improve performance. However, it also inserts a
>temporary Hop-by-hop IPv6 extension header.
>
>Using the HBH header removal method from the previous patch, the extra header
>can be removed in the bnxt driver to allow it to send Big TCP packets (bigger
>TSO packets) as well.
>
I think Eric didn't expose this function because it isn't efficient for
drivers that already process headers separately from the payload for LSO
packets. The trick is to use an optimized copy method matched to your driver's
xmit function: usually you would just memcpy the TCP header over the HBH
header exactly at the point where you copy/process those headers into the HW
descriptor.
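
To make that concrete, here is a toy, userspace-only sketch of such a copy
path: the IPv6 header is staged into the descriptor image, its nexthdr and
payload_len are patched, and the TCP header is copied to where the 8-byte HBH
header sat, so no separate memmove over the whole packet is needed. The
offsets and the stage_headers() name are hypothetical; a real driver would
take them from the skb, and a real Big TCP frame carries payload_len 0 with
the true length in the jumbo TLV.

```c
#include <stdint.h>
#include <string.h>

#define IPV6_HLEN 40   /* fixed IPv6 header */
#define HBH_HLEN   8   /* the jumbo hop-by-hop option is always 8 bytes */
#define TCP_HLEN  20   /* TCP header without options, for the toy packet */

/* Stage [IPv6][HBH][TCP] headers from 'pkt' into the HW descriptor
 * image 'desc', dropping the HBH header on the fly. Returns the number
 * of header bytes written. */
static size_t stage_headers(uint8_t *desc, const uint8_t *pkt)
{
	uint16_t plen;

	memcpy(desc, pkt, IPV6_HLEN);      /* IPv6 header as-is        */
	desc[6] = pkt[IPV6_HLEN];          /* nexthdr <- HBH's nexthdr */

	/* payload shrinks by the 8 removed bytes (toy packet only:
	 * real Big TCP frames keep 0 here and rely on the jumbo TLV) */
	plen = (uint16_t)(((pkt[4] << 8) | pkt[5]) - HBH_HLEN);
	desc[4] = (uint8_t)(plen >> 8);
	desc[5] = (uint8_t)(plen & 0xff);

	/* TCP header lands exactly where the HBH header used to be */
	memcpy(desc + IPV6_HLEN, pkt + IPV6_HLEN + HBH_HLEN, TCP_HLEN);
	return IPV6_HLEN + TCP_HLEN;
}
```

The payload copy then starts after the TCP header in the original packet, so
the removal costs nothing beyond the header staging the driver does anyway.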
>Tested:
>Compiled locally
>
>To further test functional correctness, update the GSO/GRO limit on the
>physical NIC:
>
>ip link set eth0 gso_max_size 181000
>ip link set eth0 gro_max_size 181000
>
>Note that if there are bonding or ipvlan devices on top of the physical
>NIC, their GSO sizes need to be updated as well.
>
>Then, IPv6/TCP packets with sizes larger than 64k can be observed.
>
>Big TCP functionality was tested by Michael; the feature checks have not
>been tested yet.
>
>Tested by Michael:
>I've confirmed with our hardware team that this is supported by our
>chips, and I've tested it up to gso_max_size of 524280. Thanks.
>
>Tested-by: Michael Chan <michael.chan@...adcom.com>
>Reviewed-by: Michael Chan <michael.chan@...adcom.com>
>Signed-off-by: Coco Li <lixiaoyan@...gle.com>
>---
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 26 ++++++++++++++++++++++-
> 1 file changed, 25 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>index 0fe164b42c5d..6ba1cd342a80 100644
>--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>@@ -389,6 +389,9 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> return NETDEV_TX_BUSY;
> }
>
>+ if (unlikely(ipv6_hopopt_jumbo_remove(skb)))
>+ goto tx_free;
>+
> length = skb->len;
> len = skb_headlen(skb);
> last_frag = skb_shinfo(skb)->nr_frags;
>@@ -11315,6 +11318,7 @@ static bool bnxt_exthdr_check(struct bnxt *bp, struct sk_buff *skb, int nw_off,
> u8 **nextp)
> {
> struct ipv6hdr *ip6h = (struct ipv6hdr *)(skb->data + nw_off);
>+ struct hop_jumbo_hdr *jhdr;
> int hdr_count = 0;
> u8 *nexthdr;
> int start;
>@@ -11342,9 +11346,27 @@ static bool bnxt_exthdr_check(struct bnxt *bp, struct sk_buff *skb, int nw_off,
>
> if (hdrlen > 64)
> return false;
>+
>+ /* The ext header may be a hop-by-hop header inserted for
>+ * big TCP purposes. This will be removed before sending
>+ * from NIC, so do not count it.
>+ */
>+ if (*nexthdr == NEXTHDR_HOP) {
>+ if (likely(skb->len <= GRO_LEGACY_MAX_SIZE))
>+ goto increment_hdr;
>+
>+ jhdr = (struct hop_jumbo_hdr *)nexthdr;
>+ if (jhdr->tlv_type != IPV6_TLV_JUMBO || jhdr->hdrlen != 0 ||
>+ jhdr->nexthdr != IPPROTO_TCP)
>+ goto increment_hdr;
>+
>+ goto next_hdr;
>+ }
>+increment_hdr:
>+ hdr_count++;
>+next_hdr:
> nexthdr = &hp->nexthdr;
> start += hdrlen;
>- hdr_count++;
> }
> if (nextp) {
> /* Caller will check inner protocol */
>@@ -13657,6 +13679,8 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
> dev->features &= ~NETIF_F_LRO;
> dev->priv_flags |= IFF_UNICAST_FLT;
>
>+ netif_set_tso_max_size(dev, GSO_MAX_SIZE);
>+
> #ifdef CONFIG_BNXT_SRIOV
> init_waitqueue_head(&bp->sriov_cfg_wait);
> #endif
>--
>2.39.0.rc0.267.gcb52ba06e7-goog
>
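
For readers without the header definition handy, the 8-byte option that
bnxt_exthdr_check() learns to skip looks like this; a small standalone sketch
mirroring the kernel's struct hop_jumbo_hdr and the patch's three validity
checks, with the constants redefined locally for the toy:

```c
#include <stdint.h>

#define TLV_JUMBO   194  /* mirrors IPV6_TLV_JUMBO (0xC2, RFC 2675) */
#define PROTO_TCP     6  /* mirrors IPPROTO_TCP */

/* Mirrors the kernel's struct hop_jumbo_hdr: a hop-by-hop header whose
 * only option is the jumbo payload TLV, 8 bytes in total. */
struct hop_jumbo_hdr {
	uint8_t  nexthdr;
	uint8_t  hdrlen;            /* 0: (hdrlen + 1) * 8 == 8 bytes  */
	uint8_t  tlv_type;          /* TLV_JUMBO                       */
	uint8_t  tlv_len;           /* 4: option data is the 32-bit len */
	uint32_t jumbo_payload_len; /* big-endian on the wire          */
};

/* The same three checks the patch applies before excluding the header
 * from the ext-header count. */
static int is_big_tcp_hbh(const struct hop_jumbo_hdr *jhdr)
{
	return jhdr->tlv_type == TLV_JUMBO &&
	       jhdr->hdrlen == 0 &&
	       jhdr->nexthdr == PROTO_TCP;
}
```

Any hop-by-hop header failing these checks is still counted against the
chip's ext-header limit, as before.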