Message-ID: <6618578fc34fa_36e52529429@willemb.c.googlers.com.notmuch>
Date: Thu, 11 Apr 2024 17:35:11 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Richard Gobert <richardbgobert@...il.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com,
shuah@...nel.org,
dsahern@...nel.org,
aduyck@...antis.com,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH net-next v6 5/6] net: gro: move L3 flush checks to
tcp_gro_receive and udp_gro_receive_segment

Richard Gobert wrote:
> Willem de Bruijn wrote:
> > Richard Gobert wrote:
> >> The {inet,ipv6}_gro_receive functions perform flush checks (ttl, flags,
> >> iph->id, ...) against all packets in a loop. These flush checks are
> >> currently used only for TCP flows in GRO.
> >>
> >> These checks need to be done only once in tcp_gro_receive and only against
> >> the found p skb, since they only affect flush and not same_flow.
> >
> > I don't quite understand where the performance improvements arise.
> > As inet_gro_receive will skip any p that does not match:
> >
> >         if (!NAPI_GRO_CB(p)->same_flow)
> >                 continue;
> >
> >         iph2 = (struct iphdr *)(p->data + off);
> >         /* The above works because, with the exception of the top
> >          * (inner most) layer, we only aggregate pkts with the same
> >          * hdr length so all the hdrs we'll need to verify will start
> >          * at the same offset.
> >          */
> >         if ((iph->protocol ^ iph2->protocol) |
> >             ((__force u32)iph->saddr ^ (__force u32)iph2->saddr) |
> >             ((__force u32)iph->daddr ^ (__force u32)iph2->daddr)) {
> >                 NAPI_GRO_CB(p)->same_flow = 0;
> >                 continue;
> >         }
> >
> > So these checks are already only performed against a p that matches.
> >
>
>
> Thanks for the review!
>
> flush/flush_id is currently calculated for every p in the bucket that
> still has same_flow = 1 (it is not always cleared to 0 before
> inet_gro_receive runs) and that has the same src/dst addr. Moving the
> checks to udp_gro_receive_segment/tcp_gro_receive makes them run only
> once, when a matching p is found.

So this optimization applies to flows that match all the way up to
having the same saddr/daddr. Aside from stress tests, it seems rare to
have many concurrent flows between the same pair of machines?
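
For illustration, a stand-alone toy model of the two shapes (the struct,
the helpers and the dport stand-in are invented here, not kernel code);
it just counts how often the ttl/id-style checks run:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct flow {
    uint32_t saddr, daddr;
    uint16_t dport;         /* stands in for the L4 match */
    uint8_t  ttl;
    bool     same_flow;
};

static int flush_checks_run;

/* stand-in for the checks that only decide flush, never same_flow */
static bool l3_flush(const struct flow *p, const struct flow *skb)
{
    flush_checks_run++;
    return p->ttl != skb->ttl;
}

/* today's shape: the network callback runs the flush checks for every
 * candidate whose addresses match, even if L4 later rejects it */
static void network_cb(struct flow *bucket, int n, const struct flow *skb)
{
    for (int i = 0; i < n; i++) {
        struct flow *p = &bucket[i];

        if (!p->same_flow)
            continue;
        if (p->saddr != skb->saddr || p->daddr != skb->daddr) {
            p->same_flow = false;
            continue;
        }
        l3_flush(p, skb);           /* per candidate */
    }
}

/* proposed shape: the transport callback runs them once, only against
 * the single p that fully matched (addresses + dport) */
static void transport_cb(struct flow *bucket, int n, const struct flow *skb)
{
    for (int i = 0; i < n; i++) {
        struct flow *p = &bucket[i];

        if (p->same_flow && p->saddr == skb->saddr &&
            p->daddr == skb->daddr && p->dport == skb->dport) {
            l3_flush(p, skb);       /* once, for the match */
            return;
        }
    }
}

int main(void)
{
    /* two concurrent flows between the same pair of machines */
    struct flow bucket[] = {
        { .saddr = 1, .daddr = 2, .dport = 80,  .ttl = 64, .same_flow = true },
        { .saddr = 1, .daddr = 2, .dport = 443, .ttl = 64, .same_flow = true },
    };
    struct flow skb = { .saddr = 1, .daddr = 2, .dport = 443, .ttl = 64 };

    network_cb(bucket, 2, &skb);
    printf("checks in the network callback:   %d\n", flush_checks_run);
    flush_checks_run = 0;
    transport_cb(bucket, 2, &skb);
    printf("checks in the transport callback: %d\n", flush_checks_run);
    return 0;
}

With a single flow between the pair the two are equivalent; the saving
is one set of checks per extra held candidate that shares saddr/daddr.
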
>
> In addition, UDP flows where skb_gro_receive_list is called -
> flush/flush_id is not relevant and does not need to be calculated.

That makes sense.

> In these
> cases total CPU time in GRO should drop. I could post perf numbers for
> this flow as well.
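
To make the skb_gro_receive_list point concrete, another stand-alone
sketch (types and function names invented for illustration): presumably
only the coalescing path has to care about the L3 fields, since the
fraglist path keeps every segment's own headers:

#include <stdbool.h>
#include <stdio.h>

struct pkt { int ttl; };   /* toy stand-in for the saved network header */

/* stand-in for the ttl/id/flag consistency checks */
static bool l3_flush(const struct pkt *p, const struct pkt *skb)
{
    return p->ttl != skb->ttl;
}

/* fraglist-style merge: segments are chained and keep their own
 * headers, so no L3 consistency check is needed at all */
static void merge_fraglist(struct pkt *p, struct pkt *skb)
{
    (void)p;
    (void)skb;
    puts("chained on the list, no flush checks run");
}

/* coalescing merge: one header will describe the whole super-packet,
 * so the L3 fields must agree -> the flush checks are still needed */
static void merge_coalesce(struct pkt *p, struct pkt *skb)
{
    if (l3_flush(p, skb))
        puts("flush before merging");
    else
        puts("payloads coalesced");
}

int main(void)
{
    struct pkt p = { .ttl = 64 }, skb = { .ttl = 63 };

    merge_fraglist(&p, &skb);   /* list path: checks skipped entirely */
    merge_coalesce(&p, &skb);   /* coalesce path: checks still matter */
    return 0;
}
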
>
>
> >> This leverages the previous commit in the series, in which correct network
> >> header offsets are saved for both outer and inner network headers,
> >> allowing these checks to be done only once, in tcp_gro_receive. As a
> >
> > Comments should be updated to reflect both TCP and L4 UDP. Can
> > generalize to transport callbacks.
> >
> >> result, NAPI_GRO_CB(p)->flush is not used at all. In addition, flush_id
> >> checks are more declarative and contained in inet_gro_flush, thus removing
> >> the need for flush_id in napi_gro_cb.
> >>
> >> This results in less parsing code for UDP flows, and in flush tests for
> >> TCP flows that no longer run inside the candidate loop.
> >
> > This moves network layer tests out of the network layer callbacks into
> > helpers called from the transport layer callback. And then the helper
> > has to look up the network layer header and demultiplex the protocol
> > again:
> >
> > +        if (((struct iphdr *)nh)->version == 6)
> > +                flush |= ipv6_gro_flush(nh, nh2);
> > +        else
> > +                flush |= inet_gro_flush(nh, nh2, p, i != encap_mark);
> >
> > That just seems a bit roundabout.
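
To spell out the pattern, a stand-alone toy of that shape (types and
names invented here, not the patch code): the transport-level helper
re-reads the saved network header and branches on the IP version a
second time, after the protocol was already demultiplexed once to reach
the transport callback:

#include <stdint.h>
#include <stdio.h>

/* toy stand-in for the saved network headers of skb and p */
struct toy_nh {
    uint8_t version;    /* 4 or 6 */
    uint8_t ttl;        /* ttl or hop limit */
};

static int toy_inet_flush(const struct toy_nh *nh, const struct toy_nh *nh2)
{
    return nh->ttl != nh2->ttl;     /* ttl/id/flag checks would live here */
}

static int toy_ipv6_flush(const struct toy_nh *nh, const struct toy_nh *nh2)
{
    return nh->ttl != nh2->ttl;     /* hop-limit etc. for the v6 case */
}

/* called from the TCP/UDP receive path once p has been found */
static int toy_l3_flush(const struct toy_nh *nh, const struct toy_nh *nh2)
{
    /* the version is inspected again here to pick the helper */
    if (nh->version == 6)
        return toy_ipv6_flush(nh, nh2);
    return toy_inet_flush(nh, nh2);
}

int main(void)
{
    struct toy_nh a = { .version = 4, .ttl = 64 };
    struct toy_nh b = { .version = 4, .ttl = 63 };

    printf("flush = %d\n", toy_l3_flush(&a, &b));
    return 0;
}
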
>
> IMO this commit could be part of a larger change, where all the loops
> in gro_list_prepare, inet_gro_receive and ipv6_gro_receive could be
> removed and the logic for finding a matching p moved to L4. This means
> that once p is found, the rest of the gro_list would not need to be
> traversed, and so its cache lines would not even be dirtied. I can
> provide a code snippet which would explain it better.

These loops are exactly the mechanism for finding a matching p. Though
with all the callbacks it is perhaps not the most efficient model. The
hashtable should have solved much of that.
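
(For context, a toy sketch of what the hashing buys, with an invented
bucket count and hash rather than the kernel's: the flow hash selects
one small bucket, so the candidate loops only ever walk the few held
packets that already share that hash.)

#include <stdint.h>
#include <stdio.h>

#define TOY_GRO_BUCKETS 8   /* invented; any small power of two works */

static unsigned int toy_bucket(uint32_t flow_hash)
{
    return flow_hash & (TOY_GRO_BUCKETS - 1);
}

int main(void)
{
    /* two unrelated flows usually land in different buckets, so
     * neither flow's candidate loop ever sees the other's packets */
    printf("flow hash 0x12345679 -> bucket %u\n", toy_bucket(0x12345679));
    printf("flow hash 0x9abcdefc -> bucket %u\n", toy_bucket(0x9abcdefc));
    return 0;
}
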
Yes, please share a snippet so I can understand how you would replace
this.

In the meantime, I do suggest sending the first two patches to net, as
they have Fixes tags. And then follow up with this for net-next
separately.