Open Source and information security mailing list archives
 
Message-ID: <b630ea02-3e31-4be9-b929-9b06d93bdc03@iogearbox.net>
Date: Mon, 29 Sep 2025 09:50:00 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Jordan Rife <jordan@...fe.io>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, kuba@...nel.org,
 davem@...emloft.net, razor@...ckwall.org, pabeni@...hat.com,
 willemb@...gle.com, sdf@...ichev.me, john.fastabend@...il.com,
 martin.lau@...nel.org, maciej.fijalkowski@...el.com,
 magnus.karlsson@...el.com, David Wei <dw@...idwei.uk>
Subject: Re: [PATCH net-next 16/20] netkit: Implement rtnl_link_ops->alloc

On 9/27/25 3:17 AM, Jordan Rife wrote:
> On Fri, Sep 19, 2025 at 11:31:49PM +0200, Daniel Borkmann wrote:
>> From: David Wei <dw@...idwei.uk>
>>
>> Implement rtnl_link_ops->alloc that allows the number of rx queues to be
>> set when netkit is created. By default, netkit has only a single rxq (and
>> single txq). The number of queues is deliberately not allowed to be changed
>> via ethtool -L and is fixed for the lifetime of a netkit instance.
>>
>> For netkit device creation, numrxqueues with larger than one rxq can be
>> specified. These rxqs are then mappable to real rxqs in physical netdevs:
>>
>>    ip link add numrxqueues 2 type netkit
>>
>> As a starting point, the limit of numrxqueues for netkit is currently set
>> to 2, but future work is going to allow mapping multiple real rxqs from
> 
> Is the reason for the limit just because QEMU can't take advantage of
> more today or is there some other technical limitation?

Mainly just to keep the initial series smaller; the plan is to lift this to more
queues for both io_uring and AF_XDP. QEMU supports multiple queues for AF_XDP,
but when I spoke to QEMU folks, there is still the issue that QEMU internally
needs to be able to process inbound traffic through multiple threads, so it's
not a backend limitation but a QEMU-internal one atm.

>> physical netdevs, potentially at some point even from different physical
>> netdevs.
> 
> What would be the use case for having proxied queues from multiple
> physical netdevs to the same netkit device? Couldn't you just create
> multiple netkit devices, one per physical device?

Yes, multiple netkit devices would work as well in that case.
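
For illustration, the per-physical-device alternative could look roughly like
this. Only the `ip link add ... numrxqueues 2 type netkit` form comes from the
patch description above; the interface names and the idea of one pair per NIC
are assumptions for the sketch, not part of the series:

```shell
# Sketch only: one netkit instance per physical NIC (names nk0/nk1 are
# hypothetical). Per the patch, numrxqueues must be set at creation time
# and is fixed for the lifetime of the device (no ethtool -L resizing).
ip link add nk0 numrxqueues 2 type netkit
ip link add nk1 numrxqueues 2 type netkit
```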
