Message-ID: <6595b716cb0b37e9daf4202163b4567116d4b4e2.camel@redhat.com>
Date: Thu, 05 Aug 2021 13:16:37 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Coco Li <lixiaoyan@...gle.com>, netdev@...r.kernel.org
Cc: davem@...emloft.net, kuba@...nel.org,
Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next 1/2] selftests/net: GRO coalesce test
Hello,
On Thu, 2021-08-05 at 07:36 +0000, Coco Li wrote:
> Implement a GRO testsuite that expects Linux kernel GRO behavior.
> All tests pass with the kernel software GRO stack. Run against a device
> with hardware GRO to verify that it matches the software stack.
>
> gro.c generates packets and sends them out through a packet socket. The
> receiver in gro.c (run separately) receives the packets on a packet
> socket, filters them by destination ports using BPF and checks the
> packet geometry to see whether GRO was applied.
>
> gro.sh provides a wrapper to run the gro.c in NIC loopback mode.
> It is not included in continuous testing because it modifies network
> configuration around a physical NIC: gro.sh sets the NIC in loopback
> mode, creates macvlan devices on the physical device in separate
> namespaces, and sends traffic generated by gro.c between the two
> namespaces to observe coalescing behavior.
I like this idea a lot!
Have you considered additionally running the same test on top of a veth
pair, and having such tests always enabled, so we could have some
coverage regardless of the specific H/W available?
To do the above, you should disable TSO on the veth sender peer and
enable GRO on the other end.
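A minimal sketch of such a veth setup (namespace and device names are
placeholders, and it needs root; depending on the kernel, 'gso' may need
to be turned off alongside 'tso' to get individual segments on the wire):

```shell
# Sketch only: ns_tx/ns_rx and veth0/veth1 are placeholder names.
ip netns add ns_tx
ip netns add ns_rx
ip link add dev veth0 netns ns_tx type veth peer name veth1 netns ns_rx
# Disable TSO on the sender peer so packets leave as individual segments...
ip netns exec ns_tx ethtool -K veth0 tso off
# ...and enable GRO on the receiving peer so they can be coalesced there.
ip netns exec ns_rx ethtool -K veth1 gro on
ip netns exec ns_tx ip link set dev veth0 up
ip netns exec ns_rx ip link set dev veth1 up
```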
[...]
> + setup_ns
> + # Each test is run 3 times to deflake, because given the receive timing,
> + # not all packets that should coalesce will be considered in the same flow
> + # on every try.
I thought that by tuning 'gro_flush_timeout' appropriately, you should
be able to control exactly which packets will be aggregated?
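For reference, a sketch of setting that knob (the device name eth0 is a
placeholder; the value is in nanoseconds and writing it needs root):

```shell
# Placeholder device name; hold packets up to 200us waiting for GRO merge.
echo 200000 > /sys/class/net/eth0/gro_flush_timeout
```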
Thanks!
Paolo