Message-ID: <20240222150030.68879f04@kernel.org>
Date: Thu, 22 Feb 2024 15:00:30 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Tariq Toukan <ttoukan.linux@...il.com>, Saeed Mahameed
 <saeed@...nel.org>, "David S. Miller" <davem@...emloft.net>, Paolo Abeni
 <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>, Saeed Mahameed
 <saeedm@...dia.com>, netdev@...r.kernel.org, Tariq Toukan
 <tariqt@...dia.com>, Gal Pressman <gal@...dia.com>, Leon Romanovsky
 <leonro@...dia.com>
Subject: Re: [net-next V3 15/15] Documentation: networking: Add description
 for multi-pf netdev

On Thu, 22 Feb 2024 08:51:36 +0100 Greg Kroah-Hartman wrote:
> On Tue, Feb 20, 2024 at 05:33:09PM -0800, Jakub Kicinski wrote:
> > Greg, we have a feature here where a single device of class net has
> > multiple "bus parents". We used to have one attr under class net
> > (device) which is a link to the bus parent. Now we either need to add
> > more or not bother with the linking of the whole device. Is there any
> > precedent / preference for solving this from the device model
> > perspective?  
> 
> How, logically, can a netdevice be controlled properly from 2 parent
> devices on two different busses?  How is that even possible from a
> physical point-of-view?  What exact bus types are involved here?

Two PCIe buses, two endpoints, two networking ports. It's one piece
of silicon, though, so the "slices" can talk to each other internally.
The NVRAM configuration tells both endpoints that the user wants
them "bonded"; when the PCI drivers probe, they "find each other"
using some cookie or DSN or whatnot. Once they do, they spawn
a single netdev.
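
Roughly like this, in made-up driver code (pci_get_dsn() is the real
helper for reading the PCIe Device Serial Number; the pairing list
and spawn_single_netdev() are just for illustration, not the actual
mlx5 flow):

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/slab.h>

static LIST_HEAD(pairing_list);
static DEFINE_MUTEX(pairing_lock);

struct pf_slice {
	struct list_head node;
	u64 dsn;		/* shared cookie from PCI config space */
	struct pci_dev *pdev;
};

/* hypothetical: registers one netdev backed by both slices */
static int spawn_single_netdev(struct pf_slice *a, struct pf_slice *b);

static int slice_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct pf_slice *slice, *tmp, *peer = NULL;

	slice = kzalloc(sizeof(*slice), GFP_KERNEL);
	if (!slice)
		return -ENOMEM;
	slice->pdev = pdev;
	slice->dsn = pci_get_dsn(pdev);	/* 0 if the device has no DSN */

	mutex_lock(&pairing_lock);
	list_for_each_entry(tmp, &pairing_list, node) {
		if (tmp->dsn == slice->dsn) {
			peer = tmp;
			break;
		}
	}
	list_add(&slice->node, &pairing_list);
	mutex_unlock(&pairing_lock);

	if (peer)	/* both halves present: register one netdev */
		return spawn_single_netdev(peer, slice);

	return 0;	/* first half: wait for the peer to probe */
}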

> This "shouldn't" be possible as in the end, it's usually a PCI device
> handling this all, right?

It's really a special type of bonding of two netdevs, like bonding
two ports to get twice the bandwidth, with the twist that the
balancing is done on NUMA proximity rather than on a traffic hash.

Well, plus there's the major twist that it's all done magically
"for you" in the vendor driver, and the two "lower" devices are not
visible. You only see the resulting bond.
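
For the NUMA part, think of a queue-selection hook along these lines
(sketch only; the per-queue node map is made up, while numa_node_id()
and netdev_pick_tx() are real helpers):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/topology.h>

/* queue_node[i] = NUMA node of the PF that owns TX queue i
 * (hypothetical mapping, filled in at queue setup time)
 */
static int queue_node[64];

static u16 numa_select_queue(struct net_device *dev, struct sk_buff *skb,
			     struct net_device *sb_dev)
{
	int node = numa_node_id();	/* node of the CPU sending this skb */
	u16 q;

	for (q = 0; q < dev->real_num_tx_queues; q++)
		if (queue_node[q] == node)
			return q;	/* prefer a queue on the local PF */

	/* no local queue: fall back to the normal flow-hash pick */
	return netdev_pick_tx(dev, skb, sb_dev);
}

The point being that the fast path steers by locality, and traffic
hashing only kicks in as a fallback.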

I personally think that the magic hides as many problems as it
introduces, and we'd be better off creating two separate netdevs
and then a new type of "device bond" on top. A small win is that
the "device bond" on top could be shared code across vendors.

But there are only so many hours in the day to argue with vendors.
