Message-ID: <20190305221144.GA1758@mellanox.com>
Date: Tue, 5 Mar 2019 22:11:48 +0000
From: Jason Gunthorpe <jgg@...lanox.com>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
CC: Jiri Pirko <jiri@...nulli.us>,
"davem@...emloft.net" <davem@...emloft.net>,
"oss-drivers@...ronome.com" <oss-drivers@...ronome.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Parav Pandit <parav@...lanox.com>
Subject: Re: [PATCH net-next 4/8] devlink: allow subports on devlink PCI ports
On Mon, Mar 04, 2019 at 06:11:07PM -0800, Jakub Kicinski wrote:
> > At least in RDMA we have drivers doing all combinations of this:
> > multiple ports per BDF, one port per BDF, and one composite RDMA
> > device formed by combining multiple BDFs worth of ports together.
>
> Right, last but not least we have the case where there is one port but
> multiple links (for NUMA, or just because 1 PCIe link can't really cope
> with 200Gbps). In that case which DBDF would the port go to? :(
> Do all internal info of the ASIC (health, regions, sbs) get registered
> twice?
This I don't know; at least for RDMA this configuration gets confusing
very fast, and devlink is the least of the worries..
Personally I would advocate for a master/slave kind of arrangement
where the master BDF has a different PCI DID from the slaves. devlink
and other kernel objects hang off the master.
The slave port is then only used to carry selected NUMA-aware data
path traffic and doesn't show up in devlink.
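
To make that a bit more concrete, something along these lines in the
driver probe is roughly what I have in mind. All the foo_* names and
PCI IDs below are made up, and the devlink calls are written from
memory of the current API, so treat it as a sketch rather than a
proposal for any particular driver:

#include <linux/pci.h>
#include <net/devlink.h>

/* Hypothetical DIDs -- the master and slave functions enumerate differently */
#define FOO_PCI_DID_MASTER 0x1001
#define FOO_PCI_DID_SLAVE  0x1002

struct foo_priv {
	struct devlink_port dl_port;
};

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct devlink *dl;
	struct foo_priv *priv;
	int err;

	if (id->device == FOO_PCI_DID_SLAVE) {
		/* Slave BDF: NUMA-local data path only, no devlink objects */
		return foo_setup_datapath(pdev);
	}

	/* Master BDF: devlink instance, ports, health, regions, etc. */
	dl = devlink_alloc(&foo_devlink_ops, sizeof(*priv));
	if (!dl)
		return -ENOMEM;
	priv = devlink_priv(dl);

	err = devlink_register(dl, &pdev->dev);
	if (err)
		goto err_free;

	err = devlink_port_register(dl, &priv->dl_port, 0);
	if (err)
		goto err_unregister;

	return foo_setup_datapath(pdev);

err_unregister:
	devlink_unregister(dl);
err_free:
	devlink_free(dl);
	return err;
}

The point being that everything Jakub listed (health, regions, sbs)
gets registered exactly once, against the master, and the slave BDF
never grows a devlink instance of its own.
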
Jason