Date:   Fri, 2 Dec 2022 15:09:13 -0500
From:   Etienne Champetier <champetier.etienne@...il.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     netdev@...r.kernel.org
Subject: Re: Multicast packet reordering

On Fri, 2 Dec 2022 at 13:34, Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Thu, 1 Dec 2022 23:45:53 -0500 Etienne Champetier wrote:
> > Using RPS fixes the issue, but to make it short:
> > - Is it expected to see multicast packet reordering when just tuning buffer sizes?
> > - Does it make sense to use RPS to fix this issue, or is there anything else / better?
> > - In the case of 2 containers talking over veth + bridge, is it better to keep 1 queue
> > and set rps_cpus to all CPUs, or to do more complex tuning like 1 queue per CPU + RPS on 1 CPU only?
>
> Yes, there are per-cpu queues in various places to help scaling;
> if you don't pin the sender to one CPU and it gets moved, you can
> understandably get reordering w/ UDP (both on lo and veth).

Is enabling RPS a workaround that will continue to work in the long term,
or does it just fix this reordering "by accident"?
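
(To spell out what I mean by "enabling RPS": roughly the sketch below, which just
writes the rps_cpus bitmask for one rx queue via sysfs. The device name and CPU
mask are placeholders for illustration, not our actual config, and it's untested:)

import os

# Placeholder values, not the real setup
DEV = "veth0"        # the veth whose rx queue should spread receive work
CPU_MASK = 0xf       # hex bitmask of CPUs allowed to do receive processing (CPUs 0-3 here)

def enable_rps(dev, mask, queue=0):
    # /sys/class/net/<dev>/queues/rx-<n>/rps_cpus takes a hex CPU bitmask
    path = "/sys/class/net/%s/queues/rx-%d/rps_cpus" % (dev, queue)
    with open(path, "w") as f:
        f.write("%x\n" % mask)

enable_rps(DEV, CPU_MASK)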

And I guess pinning the sender to one CPU is also important when
sending via a real NIC, not only when moving packets internally?
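
(By "pinning" I mean restricting the sending process to a single CPU,
roughly like the sketch below; the CPU number is a placeholder:)

import os

# Restrict the calling (sender) process to a single CPU so its packets
# always enter the per-cpu queues from the same CPU.
PINNED_CPU = 2                           # placeholder CPU id
os.sched_setaffinity(0, {PINNED_CPU})    # 0 == the calling process
print("sender CPU affinity:", os.sched_getaffinity(0))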

> As Andrew said, that's considered acceptable.
> Unfortunately it's one of those cases where we need to relax
> the requirements / stray from the ideal world if we want parallel
> processing to not suck...
