Message-ID: <CAKgT0UdepdJZ=QMxaAtquuUM421jkXsj8km588rooFpZRgbuVQ@mail.gmail.com>
Date: Thu, 11 Apr 2024 09:46:13 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Jiri Pirko <jiri@...nulli.us>
Cc: Andrew Lunn <andrew@...n.ch>, Jakub Kicinski <kuba@...nel.org>, pabeni@...hat.com, 
	John Fastabend <john.fastabend@...il.com>, Alexander Lobakin <aleksander.lobakin@...el.com>, 
	Florian Fainelli <f.fainelli@...il.com>, Daniel Borkmann <daniel@...earbox.net>, 
	Edward Cree <ecree.xilinx@...il.com>, netdev@...r.kernel.org, bhelgaas@...gle.com, 
	linux-pci@...r.kernel.org, Alexander Duyck <alexanderduyck@...com>, 
	Willem de Bruijn <willemdebruijn.kernel@...il.com>
Subject: Re: [net-next PATCH 00/15] eth: fbnic: Add network driver for Meta
 Platforms Host Network Interface

On Wed, Apr 10, 2024 at 11:39 PM Jiri Pirko <jiri@...nulli.us> wrote:
>
> Wed, Apr 10, 2024 at 11:07:02PM CEST, alexander.duyck@...il.com wrote:
> >On Wed, Apr 10, 2024 at 1:01 PM Andrew Lunn <andrew@...n.ch> wrote:
> >>
> >> On Wed, Apr 10, 2024 at 08:56:31AM -0700, Alexander Duyck wrote:
> >> > On Tue, Apr 9, 2024 at 4:42 PM Andrew Lunn <andrew@...n.ch> wrote:
> >> > >
> >> > > > What is less clear to me is what do we do about uAPI / core changes.
> >> > >
> >> > > I would differentiate between core changes and core additions. If there
> >> > > is very limited firmware on this device, I assume Linux is managing
> >> > > the SFP cage and, to some extent, the PCS. Extending the core to handle
> >> > > these at higher speeds than currently supported would be one such core
> >> > > addition. I've no problem with this. And I doubt it will be a single
> >> > > NIC using such additions for too long. It looks like the ClearFog CX LX2
> >> > > could make use of such extensions as well, and there are probably
> >> > > other boards and devices, maybe the Zynq 7000?
> >> >
> >> > The driver on this device doesn't have full control over the PHY.
> >> > Basically we control everything from the PCS north, and the firmware
> >> > controls everything from the PMA south, as the physical connection is
> >> > muxed between 4 slices. So this means the firmware also controls all
> >> > the I2C and the QSFP and EEPROM. The main reason for this is that
> >> > those blocks are shared resources between the slices; as such, the
> >> > firmware acts as the arbiter for the 4 slices and the BMC.
> >>
> >> Ah, shame. You took what is probably the least valuable intellectual
> >> property, and the most shareable with the community, and locked it up
> >> in firmware where nobody can use it.
> >>
> >> You should probably stop saying there is not much firmware with this
> >> device, and that Linux controls it. It clearly does not...
> >>
> >>         Andrew
> >
> >Well, I was referring more to the data path level than to the PHY
> >configuration. I suspect different people have different expectations
> >of what minimal firmware is. With this hardware we at least don't need
> >to use firmware commands to enable or disable queues, get the device
> >stats, or update a MAC address.
> >
> >When it comes to multi-host NICs, I am not sure there are going to be
> >any solutions that don't have some level of firmware due to the fact
>
> A small Linux host on the NIC that controls the eswitch, perhaps? I mean,
> a multi-PF NIC without a host in charge of the physical port and the
> switching between it and the PFs is simply a broken design. And yeah, you
> would probably now want to argue that others are already doing it the
> same way :) True that.

Well, in our case there isn't an eswitch. The issue is more that the
logic for the Ethernet PHY isn't set up to run only one port. Instead
the PHY is muxed over 2 ports per interface, and then the QSFP
interface itself is spread over 4 ports.
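
Roughly, and with made-up names (this is just a sketch of the layout,
not our actual register map or structures), the ownership split looks
something like:

/*
 * Hypothetical model of the layout described above: each of the 4
 * slices owns its own MAC/PCS and data path registers, while the PMA,
 * QSFP cage, I2C bus and module EEPROM sit below the mux, are shared
 * with the other slices and the BMC, and are therefore owned and
 * arbitrated by firmware.
 */
#define NUM_SLICES	4

struct slice_owned {		/* driver-controlled: PCS and north */
	void *mac_regs;		/* per-slice MAC */
	void *pcs_regs;		/* per-slice PCS */
	void *queue_regs;	/* queues, stats, MAC address */
};

struct fw_owned {		/* firmware-controlled: PMA and south */
	void *pma;		/* one PMA muxed across the slices */
	void *qsfp_i2c;		/* shared I2C to the QSFP module */
	void *module_eeprom;	/* shared module EEPROM */
};

struct multi_host_nic {
	struct slice_owned slice[NUM_SLICES];
	struct fw_owned shared;	/* arbitrated by firmware + BMC */
};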

What you end up with is something like the second-to-last image in
this article[1], where you have a MAC/PCS pair per host sitting on top
of one PMA, with some blocks shared between the hosts and some that
are not. The issue becomes managing access to the QSFP and PHY, and
how to prevent one host from monopolizing the PHY/QSFP or crashing
the others if something goes sideways. Then you also have to add BMC
management on top of that.
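
To make the arbitration problem concrete, here is a minimal sketch
(hypothetical mailbox interface and names, not our firmware ABI) of
what a slice's access to the shared QSFP/EEPROM ends up looking like:
every request goes through firmware, which queues it behind the other
slices and the BMC, and the driver bounds how long it will wait so a
wedged neighbor can't hang it.

/* Hypothetical firmware-mediated module EEPROM read for one slice. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FW_QSFP_TIMEOUT_MS	100	/* bound the wait on the shared bus */

struct fw_qsfp_read_req {
	uint8_t page;
	uint8_t offset;
	uint8_t len;
};

/* Stand-ins for the slice's mailbox doorbell/completion registers; the
 * real transport would be MMIO, these stubs just make the sketch build. */
static int fw_mbox_post(int slice, const void *req, size_t len)
{
	(void)slice; (void)req; (void)len;
	return 0;
}

static int fw_mbox_poll_reply(int slice, void *buf, size_t len,
			      unsigned int timeout_ms)
{
	(void)slice; (void)timeout_ms;
	memset(buf, 0, len);		/* pretend firmware filled the data */
	return 0;
}

static int slice_read_module_eeprom(int slice, uint8_t page, uint8_t off,
				    uint8_t *buf, uint8_t len)
{
	struct fw_qsfp_read_req req = { .page = page, .offset = off, .len = len };
	int err;

	/* Firmware owns the I2C/QSFP block and serializes requests from
	 * the 4 slices and the BMC, so no host can monopolize it. */
	err = fw_mbox_post(slice, &req, sizeof(req));
	if (err)
		return err;

	/* Bounded wait: fail the request rather than hang this slice's
	 * driver if firmware or another host has wedged the bus. */
	return fw_mbox_poll_reply(slice, buf, len, FW_QSFP_TIMEOUT_MS);
}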

[1]: https://semiengineering.com/integrated-ethernet-pcs-and-phy-ip-for-400g-800g-hyperscale-data-centers/
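
And to tie it back to the earlier point about not needing firmware
commands for the data path: the contrast is roughly the one below
(again with hypothetical register offsets, not our actual map), where
enabling a queue or reading its stats is a plain MMIO access from the
slice's own BAR, with no firmware round trip involved.

/* Hypothetical per-slice data path registers, driven directly by the driver. */
#include <stdint.h>

#define RXQ_CTRL(q)	(0x1000u + (q) * 0x40u)	/* queue enable/disable */
#define RXQ_PKT_CNT(q)	(0x1004u + (q) * 0x40u)	/* packet counter */

static inline void mmio_write32(volatile uint32_t *bar, uint32_t off, uint32_t val)
{
	bar[off / 4] = val;
}

static inline uint32_t mmio_read32(volatile uint32_t *bar, uint32_t off)
{
	return bar[off / 4];
}

/* Queue control and stats are direct register accesses -- no mailbox,
 * no firmware command -- which is the sense in which the data path
 * here is "minimal firmware". */
static void rxq_enable(volatile uint32_t *bar, unsigned int q)
{
	mmio_write32(bar, RXQ_CTRL(q), 1);
}

static uint32_t rxq_packets(volatile uint32_t *bar, unsigned int q)
{
	return mmio_read32(bar, RXQ_PKT_CNT(q));
}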
