Message-ID: <CAOdf3goC0eXSqdpdcq_-4wegMTBmYdK_uQOKUpjX7azvTamWDA@mail.gmail.com>
Date: Fri, 2 Dec 2022 15:09:13 -0500
From: Etienne Champetier <champetier.etienne@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org
Subject: Re: Multicast packet reordering
On Fri, Dec 2, 2022 at 1:34 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Thu, 1 Dec 2022 23:45:53 -0500 Etienne Champetier wrote:
> > Using RPS fixes the issue, but to make it short:
> > - Is it expected to see multicast packet reordering when just tuning buffer sizes?
> > - Does it make sense to use RPS to fix this issue, or is there anything else/better?
> > - In the case of 2 containers talking over veth + bridge, is it better to keep 1 queue
> >   and set rps_cpus to all cpus, or to do some more complex tuning like 1 queue per cpu + RPS on 1 cpu only?
>
> Yes, there are per-cpu queues in various places to help scaling;
> if you don't pin the sender to one CPU and it gets moved, you can
> understandably get reordering w/ UDP (both on lo and veth).
Is enabling RPS a workaround that will continue to work in the long term,
or does it just fix this reordering "by accident"?
And I guess pinning the sender to one CPU is also important when
sending via a real NIC, not only when moving packets internally?
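
For what it's worth, the RPS side of my workaround is roughly the sketch
below (Python; the veth name "veth0" and the single rx-0 queue are just
placeholder assumptions for the example). It writes a hex bitmask covering
all online CPUs into rps_cpus:

    import os

    # Placeholder interface name for this example; in practice this would
    # be the bridge-side veth peer of the container.
    IFACE = "veth0"

    def enable_rps_all_cpus(iface: str) -> None:
        # Build a hex bitmask with one bit per online CPU (e.g. 4 CPUs -> "f")
        # and write it to the rx-0 queue, so received packets are steered
        # across all CPUs instead of being processed where they arrived.
        mask = (1 << os.cpu_count()) - 1
        path = f"/sys/class/net/{iface}/queues/rx-0/rps_cpus"
        with open(path, "w") as f:
            f.write(f"{mask:x}\n")

    enable_rps_all_cpus(IFACE)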
> As Andrew said that's considered acceptable.
> Unfortunately it's one of those cases where we need to relax
> the requirements / stray from the ideal world if we want parallel
> processing to not suck..
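
To be concrete about what I mean by pinning the sender above, I picture
something like the following (CPU 0 and the multicast group/port are
arbitrary values for the example): restrict the sending process to a single
CPU so every datagram of the stream is enqueued from the same per-CPU
context.

    import os
    import socket

    # Pin this (sending) process to a single CPU; CPU 0 is an arbitrary
    # choice, the point is only that the sender never migrates mid-stream.
    os.sched_setaffinity(0, {0})

    # Arbitrary multicast destination, just for illustration.
    GROUP, PORT = "239.1.1.1", 5000

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    # Send a simple sequenced stream; with the affinity set above the
    # packets should arrive in order even without RPS on the receive side.
    for seq in range(1000):
        sock.sendto(seq.to_bytes(4, "big"), (GROUP, PORT))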