Message-ID: <CAF=yD-+Tvyz8x+F+VnZXYDToW-kC2MuPG5Lcna2W+CQwTOMybQ@mail.gmail.com>
Date: Mon, 28 Jan 2019 14:50:34 -0600
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Steffen Klassert <steffen.klassert@...unet.com>
Cc: Network Development <netdev@...r.kernel.org>,
Willem de Bruijn <willemb@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
"Jason A. Donenfeld" <Jason@...c4.com>
Subject: Re: [PATCH RFC v2 2/3] net: Support GRO/GSO fraglist chaining.
On Mon, Jan 28, 2019 at 2:53 AM Steffen Klassert
<steffen.klassert@...unet.com> wrote:
>
> This patch adds the core functions to chain/unchain
> GSO skbs at the frag_list pointer. This also adds
> a new GSO type SKB_GSO_FRAGLIST and a is_flist
> flag to napi_gro_cb which indicates that this
> flow will be GROed by fraglist chaining.
>
> Signed-off-by: Steffen Klassert <steffen.klassert@...unet.com>
> +struct sk_buff *skb_segment_list(struct sk_buff *skb,
> + netdev_features_t features,
> + unsigned int offset)
> +{
> + struct sk_buff *list_skb = skb_shinfo(skb)->frag_list;
> + unsigned int tnl_hlen = skb_tnl_header_len(skb);
> + unsigned int delta_truesize = 0;
> + unsigned int delta_len = 0;
> + struct sk_buff *tail = NULL;
> + struct sk_buff *nskb;
> +
> + skb_push(skb, -skb_network_offset(skb) + offset);
> +
> + skb_shinfo(skb)->frag_list = NULL;
> +
> + do {
> + nskb = list_skb;
> + list_skb = list_skb->next;
> +
> + if (!tail)
> + skb->next = nskb;
> + else
> + tail->next = nskb;
> +
> + tail = nskb;
> +
> + delta_len += nskb->len;
> + delta_truesize += nskb->truesize;
> +
> + skb_push(nskb, -skb_network_offset(nskb) + offset);
> +
> + if (!secpath_exists(nskb))
> + __skb_ext_copy(nskb, skb);
> +
> + memcpy(nskb->cb, skb->cb, sizeof(skb->cb));
> +
> + nskb->ip_summed = CHECKSUM_NONE;
> + nskb->csum_valid = 1;
> + nskb->tstamp = skb->tstamp;
> + nskb->dev = skb->dev;
> + nskb->queue_mapping = skb->queue_mapping;
> +
> + nskb->mac_len = skb->mac_len;
> + nskb->mac_header = skb->mac_header;
> + nskb->transport_header = skb->transport_header;
> + nskb->network_header = skb->network_header;
> + skb_dst_copy(nskb, skb);
> +
> + skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb));
> + skb_copy_from_linear_data_offset(skb, -tnl_hlen,
> + nskb->data - tnl_hlen,
> + offset + tnl_hlen);
> +
> + if (skb_needs_linearize(nskb, features) &&
> + __skb_linearize(nskb)) {
> + kfree_skb_list(skb->next);
> + skb->next = NULL;
> + return ERR_PTR(-ENOMEM);
> + }
> + } while (list_skb);
> +
> + skb->truesize = skb->truesize - delta_truesize;
> + skb->data_len = skb->data_len - delta_len;
> + skb->len = skb->len - delta_len;
> +
> + skb_gso_reset(skb);
> +
> + skb->prev = tail;
> +
> + if (skb_needs_linearize(skb, features) &&
> + __skb_linearize(skb)) {
> + skb->next = NULL;
> + kfree_skb_list(skb->next);
These two statements are in inverse order: skb->next is set to NULL
before kfree_skb_list() is called, so the chained segments are leaked
instead of freed.

Also, I would probably deduplicate this with the same branch above in
a new err_linearize: block.
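Something like this untested sketch (err_linearize is only a
suggested label, and I'm omitting the unquoted parts of the
function); it would also fix the ordering problem above:

		if (skb_needs_linearize(nskb, features) &&
		    __skb_linearize(nskb))
			goto err_linearize;
	...
	if (skb_needs_linearize(skb, features) &&
	    __skb_linearize(skb))
		goto err_linearize;
	...
err_linearize:
	/* Free the chained segments first, then unlink, so both
	 * failure paths share one cleanup.
	 */
	kfree_skb_list(skb->next);
	skb->next = NULL;
	return ERR_PTR(-ENOMEM);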