Message-ID: <CAKgT0UccfvfwLemZ5XcWoZPupQi6U2rfjbGjfjFfvJptMcNYeQ@mail.gmail.com>
Date: Wed, 25 Apr 2018 14:02:10 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: Netdev <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>,
Dimitris Michailidis <dmichail@...gle.com>,
Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next 02/10] udp: add gso
On Wed, Apr 25, 2018 at 1:51 PM, Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
>> It might be nice if you could break this into two patches. One for
>> actually doing the GSO in software, and another enabling the stack to
>> request it.
>
> Okay.
>
>>> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
>>> index d274059529eb..a4a5c0c5cba8 100644
>>> --- a/include/linux/skbuff.h
>>> +++ b/include/linux/skbuff.h
>>> @@ -573,6 +573,8 @@ enum {
>>> SKB_GSO_ESP = 1 << 15,
>>>
>>> SKB_GSO_UDP = 1 << 16,
>>> +
>>> + SKB_GSO_UDP_L4 = 1 << 17,
>>> };
>>>
>>
>> Part of me really wishes we could just rename SKB_GSO_UDP to something
>> like SKB_GFO_UDP, since GSO implies "segmentation" but what we really
>> mean is "fragmentation" in the case of the existing UDP offload.
>
> I have been itching to rename the UFO flag, though I then find
> SKB_GSO_UFO easier to differentiate from SKB_GSO_UDP
> than SKB_GFO_UDP.
>
> But, the uapi headers will continue to have
> VIRTIO_NET_HDR_GSO_UDP for UFO, so changing this
> might only further complicate things.
Agreed. I was just venting frustration that we didn't do something
about it sooner.
>>> +struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
>>> + netdev_features_t features,
>>> + unsigned int mss, __sum16 check)
>>> +{
>>> + struct udphdr *uh = udp_hdr(gso_skb);
>>> + struct sk_buff *segs;
>>> + unsigned int hdrlen;
>>> +
>>> + if (gso_skb->len <= sizeof(*uh) + mss)
>>> + return ERR_PTR(-EINVAL);
>>> +
>>> + uh->len = htons(sizeof(*uh) + mss);
>>> + uh->check = check;
>>> + skb_pull(gso_skb, sizeof(*uh));
>>> + hdrlen = gso_skb->data - skb_mac_header(gso_skb);
>>
>> Normally I don't believe we modify the headers before calling
>> skb_segment; that happens afterward via a while (skb->next) {}
>> type loop.
>
> Setting the header for all but the last segment before segmentation
> avoided the need for a while loop, at least before the wmem_alloc
> patch.
>
> With that patch, the argument no longer really holds, so I can convert
> if that is needed for other features, like GSO_PARTIAL.
Thanks.
>> That way, for things like GSO_PARTIAL, we can update after
>> segmentation, since most likely there are only going to be 2 segments
>> instead of multiple MSS-sized segments.
>
> I don't quite follow. Which two segments?
When we do GSO partial we end up with 2 segments: one really big one
that is a multiple of MSS, and the remainder, assuming the frame is odd
sized. The idea is that we can just replicate all of the headers from
the outer IP header to the inner transport header in hardware, so we do
all the updates based on that assumption and then do the standard
segmentation update on the tail skb.
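
To make that split concrete, here is a tiny standalone sketch (purely
illustrative; payload_len and mss are made-up inputs, not values taken
from the patch) of how a GSO partial split divides a payload into one
multiple-of-MSS segment plus an odd-sized tail:

/* Illustrative only: the two-segment split produced by GSO partial.
 * The device replicates the headers for the multiple-of-MSS part;
 * the odd-sized tail still gets the normal software length and
 * checksum fixup.
 */
#include <stdio.h>

int main(void)
{
	unsigned int payload_len = 10000; /* hypothetical UDP payload */
	unsigned int mss = 1400;          /* hypothetical segment size */
	unsigned int partial = (payload_len / mss) * mss;
	unsigned int tail = payload_len - partial;

	printf("partial segment: %u bytes (%u x MSS)\n",
	       partial, payload_len / mss);
	if (tail)
		printf("tail segment: %u bytes, needs fixup\n", tail);
	return 0;
}
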
>>> +
>>> + segs = skb_segment(gso_skb, features);
>>> + if (unlikely(IS_ERR_OR_NULL(segs)))
>>> + return segs;
>>> +
>>> + /* If last packet is not full, fix up its header */
>>> + if (segs->prev->len != hdrlen + mss) {
>>> + unsigned int mss_last = segs->prev->len - hdrlen;
>>> +
>>> + uh = udp_hdr(segs->prev);
>>> + uh->len = htons(sizeof(*uh) + mss_last);
>>> + csum_replace2(&uh->check, htons(mss), htons(mss_last));
>>> + }
>>> +
>>
>> You could probably just assume that this last segment is always going
>> to need to be updated regardless. If you are doing any sort of
>> segmentation, the last segment will always need the length and checksum
>> updated anyway, and since you probably need to move over to the "while
>> (skb->next)" style loop, you could just assume the last segment
>> needs updating.
>
> Ack. If we have to do a loop and set everything in there, I'll just write
> all headers unconditionally.
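
For reference, a minimal sketch of that unconditional rewrite, assuming
a walk over the list returned by skb_segment(). The names segs, mss and
hdrlen follow the quoted patch, but the loop itself is only illustrative,
not the final code:

/* Sketch: write the UDP length and checksum of every segment after
 * skb_segment(), instead of fixing up only the last one.  For the
 * full-sized segments csum_replace2() is a no-op, since the old and
 * new lengths are equal.
 */
struct sk_buff *seg;
struct udphdr *uh;

for (seg = segs; seg; seg = seg->next) {
	unsigned int payload = seg->len - hdrlen;

	uh = udp_hdr(seg);
	uh->len = htons(sizeof(*uh) + payload);
	csum_replace2(&uh->check, htons(mss), htons(payload));
}

This is the same list a while (skb->next) loop would walk; the for form
is shown here only for compactness.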