Message-ID: <e0f9fb60-b09c-30ad-0670-aa77cc3b2e12@gmail.com>
Date: Thu, 1 Dec 2022 23:45:53 -0500
From: Etienne Champetier <champetier.etienne@...il.com>
To: netdev@...r.kernel.org
Subject: Multicast packet reordering
Hello all,
I'm investigating random multicast packet reordering between 2 containers,
even under moderate traffic (16 multicast video streams, ~80 Mbps total), on Alma 8.
To simplify testing, I first reproduced the issue with iperf2, then reproduced it on lo.
I can reproduce multicast packet reordering on lo on Alma 8, Alma 9, and Fedora 37, but not on CentOS 7.
As Fedora 37 is running kernel 6.0.7-301.fc37.x86_64, I'm reporting it here.
Using RPS fixes the issue, but to make it short:
- Is it expected to see multicast packet reordering when just tuning buffer sizes?
- Does it make sense to use RPS to fix this issue, or is there anything else/better?
- In the case of 2 containers talking over veth + bridge, is it better to keep 1 queue
and set rps_cpus to all CPUs, or to do some more complex tuning like 1 queue per CPU + RPS on 1 CPU only?
The details:
On a Dell R7515 / AMD EPYC 7702P 64-Core Processor (128 threads) / 1 NUMA node
For each OS I run 3 tests:
1) initial tuning
tuned-adm profile network-throughput
2) increase buffers
sysctl -f - <<'EOF'
net.core.netdev_max_backlog=250000
net.core.rmem_default=33554432
net.core.rmem_max=33554432
net.core.wmem_default=4194304
net.core.wmem_max=4194304
EOF
3) Enable RPS
echo ffffffff,ffffffff,ffffffff,ffffffff > /sys/class/net/lo/queues/rx-0/rps_cpus
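For reference, the rps_cpus value above is a comma-separated list of 32-bit hex words covering all 128 CPUs of this box. A minimal sketch of how such an all-CPUs mask can be built (the rps_mask helper name is mine; it assumes the CPU count is a multiple of 32 for simplicity):

```shell
# Sketch: build an all-CPUs rps_cpus mask (hypothetical helper;
# assumes the CPU count is a multiple of 32).
rps_mask() {
    n=$1                  # number of CPUs to cover
    words=$((n / 32))     # one 32-bit hex word per 32 CPUs
    mask=""
    i=0
    while [ "$i" -lt "$words" ]; do
        # each word is all-ones: every CPU in that word is eligible for RPS
        mask="${mask:+$mask,}ffffffff"
        i=$((i + 1))
    done
    printf '%s\n' "$mask"
}

rps_mask 128   # -> ffffffff,ffffffff,ffffffff,ffffffff
```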
Then I start the servers and the client:
for i in {1..10}; do
iperf -s -u -B 239.255.255.$i%lo -i 1 &
done
iperf -c 239.255.255.1 -B 127.0.0.1 -u -i 1 -b 2G -l 1316 -P 10 --incr-dstip -t0
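To compare reorder counts across runs, I tally them from saved server output. A small sketch, assuming iperf2's per-interval "N datagrams received out-of-order" summary lines (the count_ooo name is mine, and the exact wording may differ between iperf versions):

```shell
# Hypothetical helper: sum the out-of-order datagram counts reported in a
# saved iperf2 server log (assumes the "received out-of-order" wording).
count_ooo() {
    grep -o '[0-9][0-9]* datagrams received out-of-order' "$1" \
        | awk '{ sum += $1 } END { print sum + 0 }'
}
```

Usage would be something like `iperf -s -u -B 239.255.255.1%lo -i 1 > server.log` followed by `count_ooo server.log`.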
On Fedora 37, Alma 8&9:
test 1: I get drops and reordering
test 2: no drops but reordering
test 3: clean
On CentOS 7 I don't reach 2G, but I don't get reordering: I get drops in test 1,
everything is clean in test 2, and drops return when enabling RPS in test 3.
Best
Etienne