Message-ID: <1464873144.5939.177.camel@edumazet-glaptop3.roam.corp.google.com>
Date:	Thu, 02 Jun 2016 06:12:24 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Alexander Duyck <aduyck@...antis.com>
Cc:	netdev@...r.kernel.org, davem@...emloft.net,
	alexander.duyck@...il.com
Subject: Re: [net-next PATCH 2/2] tun: Configure Rx queues to default to RPS enabled

On Wed, 2016-06-01 at 18:17 -0700, Alexander Duyck wrote:
> This patch enables tun/tap interfaces to use RPS by default.  The
> motivation behind this is to address the fact that the interfaces are
> currently using netif_rx_ni which in turn will queue packets on whatever
> CPU the function is called on, and when combined with load balancing this
> can result in packets being received out of order.

Hmm...

I do not believe this can be made the default; it would be a major
regression in some cases.

Some users want cpu isolation. This is their number one priority.

If they use one cpu to feed packets through a tun device, they do not
want to spread a DDoS to all online cpus. Traffic from one VM would hurt
all the others.

We already have ways to avoid reorders in the TX path (skb->ooo_okay)
and in the receive path, in the RFS layer.
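
For reference, the TX-path check boils down to something like this (a
simplified paraphrase of __netdev_pick_tx() in net/core/dev.c; the XPS
lookup and socket validity checks are elided, and the function name
here is just for illustration):

static u16 pick_tx_queue(struct net_device *dev, struct sk_buff *skb)
{
	struct sock *sk = skb->sk;
	int queue_index = sk_tx_queue_get(sk);	/* -1 when unset */

	/* Re-hash the flow to a (possibly different) queue only when
	 * the cached index is invalid, or when skb->ooo_okay says no
	 * packets of this flow are still in flight, so moving the
	 * flow cannot reorder it.
	 */
	if (queue_index < 0 || skb->ooo_okay ||
	    queue_index >= dev->real_num_tx_queues) {
		queue_index = skb_tx_hash(dev, skb);
		if (sk)
			sk_tx_queue_set(sk, queue_index);
	}
	return queue_index;
}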

tun could probably avoid reorders using a similar technique.

netif_rx_ni() could be extended to give a hint about the cpu that
processed prior packets.

(This would be a new function.)

If the prior cpu is different from the current cpu, we have to look at
the backlog of the prior cpu.
If it is not empty, and the prior cpu is online, we need to queue the
packet to the prior cpu's queue.
If it is empty, we can 'switch' to the new cpu's queue.
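
A rough, untested sketch of what that could look like, to illustrate
(netif_rx_ni_hint() and its hint argument are hypothetical;
enqueue_to_backlog() is static in net/core/dev.c today and would have
to be exposed; peeking at a remote cpu's backlog without the rps lock
is racy, but a stale non-empty reading only costs one extra hop):

/* Like netif_rx_ni(), but tries to keep a flow on the cpu that
 * handled its prior packets until that cpu's backlog has drained.
 * *hint_cpu is the cpu that got the previous packet (-1 if none)
 * and is updated when the flow switches cpus.
 */
int netif_rx_ni_hint(struct sk_buff *skb, int *hint_cpu)
{
	int prior_cpu = *hint_cpu;
	unsigned int qtail;
	int ret;

	preempt_disable();
	if (prior_cpu >= 0 && prior_cpu != smp_processor_id() &&
	    cpu_online(prior_cpu) &&
	    !skb_queue_empty(&per_cpu(softnet_data,
				      prior_cpu).input_pkt_queue)) {
		/* Prior cpu still has packets queued: enqueue there
		 * so this packet cannot overtake them.
		 */
		ret = enqueue_to_backlog(skb, prior_cpu, &qtail);
		preempt_enable();
		return ret;
	}
	/* Prior backlog drained (or same/offline cpu): safe to
	 * 'switch' the flow to the current cpu.
	 */
	*hint_cpu = smp_processor_id();
	preempt_enable();

	return netif_rx_ni(skb);
}

tun would keep one such hint per queue (initialized to -1) and pass it
back on every packet.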


