Message-ID: <20200331103255.549ea899@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Tue, 31 Mar 2020 10:32:55 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Parav Pandit <parav@...lanox.com>
Cc: Jiri Pirko <jiri@...nulli.us>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
Yuval Avnery <yuvalav@...lanox.com>,
"jgg@...pe.ca" <jgg@...pe.ca>,
Saeed Mahameed <saeedm@...lanox.com>,
"leon@...nel.org" <leon@...nel.org>,
"andrew.gospodarek@...adcom.com" <andrew.gospodarek@...adcom.com>,
"michael.chan@...adcom.com" <michael.chan@...adcom.com>,
Moshe Shemesh <moshe@...lanox.com>,
Aya Levin <ayal@...lanox.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Vlad Buslov <vladbu@...lanox.com>,
Yevgeny Kliteynik <kliteyn@...lanox.com>,
"dchickles@...vell.com" <dchickles@...vell.com>,
"sburla@...vell.com" <sburla@...vell.com>,
"fmanlunas@...vell.com" <fmanlunas@...vell.com>,
Tariq Toukan <tariqt@...lanox.com>,
"oss-drivers@...ronome.com" <oss-drivers@...ronome.com>,
"snelson@...sando.io" <snelson@...sando.io>,
"drivers@...sando.io" <drivers@...sando.io>,
"aelior@...vell.com" <aelior@...vell.com>,
"GR-everest-linux-l2@...vell.com" <GR-everest-linux-l2@...vell.com>,
"grygorii.strashko@...com" <grygorii.strashko@...com>,
mlxsw <mlxsw@...lanox.com>, Ido Schimmel <idosch@...lanox.com>,
Mark Zhang <markz@...lanox.com>,
"jacob.e.keller@...el.com" <jacob.e.keller@...el.com>,
Alex Vesker <valex@...lanox.com>,
"linyunsheng@...wei.com" <linyunsheng@...wei.com>,
"lihong.yang@...el.com" <lihong.yang@...el.com>,
"vikas.gupta@...adcom.com" <vikas.gupta@...adcom.com>,
"magnus.karlsson@...el.com" <magnus.karlsson@...el.com>
Subject: Re: [RFC] current devlink extension plan for NICs

On Tue, 31 Mar 2020 07:45:51 +0000 Parav Pandit wrote:
> > In fact very little belongs to the port in that model. So why have
> > PCI ports in the first place?
> >
> For a few reasons.
> 1. PCI ports establish the relationship between an eswitch port and
> its representor netdevice.
> Relying on the plain netdev name doesn't work in certain PCI
> topologies where the netdev name would exceed 15 characters.
> 2. Health reporters can be at the port level.
Why? The health reporters we have now, AFAIK, are for FW and for
queue hangs. Aren't the queues on the slice and the FW on the device?
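
To make that concrete, here is a minimal, hypothetical driver sketch
(all foo_* names are invented, and the devlink health prototypes are
only approximate since they have shifted between kernel versions) of
how a FW reporter is registered today - against the devlink instance,
i.e. the device, not against a port:

        #include <linux/err.h>
        #include <net/devlink.h>

        struct foo_priv {
                struct devlink_health_reporter *fw_reporter;
        };

        static int foo_fw_recover(struct devlink_health_reporter *reporter,
                                  void *priv_ctx,
                                  struct netlink_ext_ack *extack)
        {
                /* Device-wide FW recovery would go here; nothing is per-port. */
                return 0;
        }

        static const struct devlink_health_reporter_ops foo_fw_reporter_ops = {
                .name = "fw",
                .recover = foo_fw_recover,
        };

        static int foo_health_init(struct devlink *devlink,
                                   struct foo_priv *priv)
        {
                /*
                 * The reporter hangs off the devlink instance, i.e. the
                 * device. A queue/tx-hang reporter would arguably belong
                 * to whatever owns the queues - the slice in this
                 * discussion.
                 */
                priv->fw_reporter =
                        devlink_health_reporter_create(devlink,
                                                       &foo_fw_reporter_ops,
                                                       0 /* graceful period */,
                                                       priv);
                return PTR_ERR_OR_ZERO(priv->fw_reporter);
        }
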
> 3. In the future, at the eswitch PCI port, I will be adding dpipe
> support for the internal flow tables implemented by the driver.
> 4. There was inconsistency among vendor drivers in using/abusing
> phys_port_name of the eswitch ports. This is now consolidated via the
> devlink port in the core, which provides a consistent view across all
> vendor drivers.
>
> So PCI eswitch-side ports are useful regardless of slices.
>
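
For reference, the consolidation in point 4 (and the port/representor
association in point 1) looks roughly like this from the driver side.
The foo_* names are invented and the devlink_port_attrs_pci_vf_set()
argument list has changed across kernel versions, so treat the
prototypes as approximate:

        #include <net/devlink.h>

        struct foo_rep {
                struct devlink_port dl_port;
                struct net_device *netdev;
        };

        static int foo_rep_port_register(struct devlink *devlink,
                                         struct foo_rep *rep,
                                         unsigned int port_index,
                                         u16 pf, u16 vf)
        {
                int err;

                /* Describe the eswitch-side port: it represents VF <vf>
                 * of PF <pf>.
                 */
                devlink_port_attrs_pci_vf_set(&rep->dl_port,
                                              0 /* controller */,
                                              pf, vf, false /* external */);

                err = devlink_port_register(devlink, &rep->dl_port,
                                            port_index);
                if (err)
                        return err;

                /* Tie the representor netdev to the devlink port; the core
                 * then derives a consistent phys_port_name (e.g. "pf0vf3")
                 * for it, so user space doesn't have to depend on a netdev
                 * name capped at 15 characters (IFNAMSIZ - 1).
                 */
                devlink_port_type_eth_set(&rep->dl_port, rep->netdev);
                return 0;
        }
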
> >> Additionally, the devlink port object doesn't go through the same
> >> state machine as the one a slice has to go through.
> >> So it's weird that some devlink ports have a state machine and some
> >> don't.
> >
> > You mean for VFs? I think you can add the states to the API.
> >
> As we agreed above, eswitch-side objects (devlink port and
> representor netdev) should not be used for a 'portion of a device',
We haven't agreed; I just explained how we differ.
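
FWIW, "adding the states to the API" could be as small as an attribute
on the existing port object. The sketch below is purely hypothetical -
none of these names exist in the kernel today - and is only meant to
show the shape of it:

        #include <net/devlink.h>

        /* Hypothetical function states, not an existing kernel enum. */
        enum foo_port_fn_state {
                FOO_PORT_FN_STATE_INACTIVE,     /* function not deployed */
                FOO_PORT_FN_STATE_ACTIVE,       /* function deployed and usable */
        };

        /* Hypothetical driver callback the core could invoke when user
         * space requests a state change on a devlink port, analogous to
         * the existing per-port attribute setters.
         */
        struct foo_port_fn_ops {
                int (*state_set)(struct devlink_port *port,
                                 enum foo_port_fn_state state,
                                 struct netlink_ext_ack *extack);
        };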