Message-ID: <66aab3614bbab_21c08c29492@willemb.c.googlers.com.notmuch>
Date: Wed, 31 Jul 2024 17:57:53 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Randy Li <ayaka@...lik.info>, 
 Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: netdev@...r.kernel.org, 
 jasowang@...hat.com, 
 davem@...emloft.net, 
 edumazet@...gle.com, 
 kuba@...nel.org, 
 pabeni@...hat.com, 
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] net: tuntap: add ioctl() TUNGETQUEUEINDX to fetch queue
 index

nits:

- INDX->INDEX. It's correct in the code
- prefix networking patches with the target tree: PATCH net-next

Randy Li wrote:
> 
> On 2024/7/31 22:12, Willem de Bruijn wrote:
> > Randy Li wrote:
> >> We need the queue index in qdisc mapping rule. There is no way to
> >> fetch that.
> > In which command exactly?
> 
> That is for sch_multiq, here is an example
> 
> tc qdisc add dev tun0 root handle 1: multiq
> 
> tc filter add dev tun0 parent 1: protocol ip prio 1 u32 match ip dst 172.16.10.1 action skbedit queue_mapping 0
> tc filter add dev tun0 parent 1: protocol ip prio 1 u32 match ip dst 172.16.10.20 action skbedit queue_mapping 1
> 
> tc filter add dev tun0 parent 1: protocol ip prio 1 u32 match ip dst 172.16.10.10 action skbedit queue_mapping 2

If using an IFF_MULTI_QUEUE tun device, packets are automatically
load balanced across the multiple queues, in tun_select_queue.
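For illustration, a user-space sketch of that behavior (this is not the kernel code; `zlib.crc32` merely stands in for the kernel's flow hash, and the modulo mapping is a simplification of tun_select_queue):

```python
# Illustrative sketch of tun_select_queue-style steering: the kernel
# hashes the flow (rxhash) and maps the hash onto the open queues, so
# packets of one flow consistently land on the same queue.
import zlib

def select_queue(flow_tuple: tuple, num_queues: int) -> int:
    """Map a flow (src/dst address and ports) to a queue index."""
    # zlib.crc32 stands in for the kernel's flow hash here.
    rxhash = zlib.crc32(repr(flow_tuple).encode())
    return rxhash % num_queues

# Packets of the same flow always pick the same queue:
flow = ("172.16.10.1", "172.16.10.20", 5000, 6000)
assert select_queue(flow, 4) == select_queue(flow, 4)
```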

If you want more explicit queue selection than by rxhash, tun
supports TUNSETSTEERINGEBPF.
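Roughly, the contract is that the attached eBPF program's return value, taken modulo the number of live queues, selects the queue. A plain Python function stands in for the eBPF program in this illustrative emulation (the dst-to-queue table is a hypothetical policy, not part of the ABI):

```python
# Illustrative emulation of the TUNSETSTEERINGEBPF contract: the kernel
# runs the attached program per packet and uses its return value modulo
# the number of queues to pick the queue.
def steer_by_dst(packet_dst: str) -> int:
    # Hypothetical policy: pin each peer address to its own queue.
    table = {"172.16.10.1": 0, "172.16.10.20": 1, "172.16.10.10": 2}
    return table.get(packet_dst, 0)

def kernel_pick_queue(prog, packet_dst: str, num_queues: int) -> int:
    # What the kernel does with the program's return value.
    return prog(packet_dst) % num_queues

assert kernel_pick_queue(steer_by_dst, "172.16.10.20", 3) == 1
```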

> 
> The purpose here is to take advantage of multiple threads. On the 
> server side (the gateway of the tunnel's subnet), each peer usually 
> uses a different encryption/decryption key pair, so it would be 
> better to handle each one in its own thread. Otherwise the 
> application would need to implement a dispatcher.

A thread in which context? Or do you mean queue?

> 
> I am a newbie to tc(8). I verified the commands above with a 
> multi-threaded tun demo. But I don't know how to drop the unwanted 
> ingress packets here; queue 0 may be a little broken.

Not opposed to exposing the queue index if there is a need. Not sure
yet that there is.

Also, since for an IFF_MULTI_QUEUE device the queue_id is just assigned
iteratively, it can be inferred without an explicit call.
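That inference can be sketched in user space (illustrative only: the real mechanism is one TUNSETIFF ioctl per queue fd, and the fake `MultiQueueTun` class and integer "fds" below are stand-ins):

```python
# Sketch: with IFF_MULTI_QUEUE, each successive attach of a queue fd to
# the same device gets the next queue index, so user space can track the
# indices itself instead of querying the kernel for them.
class MultiQueueTun:
    def __init__(self, name: str):
        self.name = name
        self.queues = []  # a fd's implicit queue index is its position

    def attach_queue(self, fd: int) -> int:
        # Stand-in for TUNSETIFF on an IFF_MULTI_QUEUE device.
        self.queues.append(fd)
        return len(self.queues) - 1  # index assigned in attach order

tun = MultiQueueTun("tun0")
indices = [tun.attach_queue(fd) for fd in (10, 11, 12)]
assert indices == [0, 1, 2]
```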
