Message-ID: <20251128191924.7c54c926@kernel.org>
Date: Fri, 28 Nov 2025 19:19:24 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Jiri Pirko <jiri@...nulli.us>
Cc: Tariq Toukan <tariqt@...dia.com>, Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Donald Hunter
<donald.hunter@...il.com>, Jonathan Corbet <corbet@....net>, Saeed Mahameed
<saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Mark Bloch
<mbloch@...dia.com>, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-rdma@...r.kernel.org, Gal Pressman
<gal@...dia.com>, Moshe Shemesh <moshe@...dia.com>, Carolina Jubran
<cjubran@...dia.com>, Cosmin Ratiu <cratiu@...dia.com>, Jiri Pirko
<jiri@...dia.com>, Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH net-next V4 02/14] documentation: networking: add shared
devlink documentation
On Fri, 28 Nov 2025 12:00:13 +0100 Jiri Pirko wrote:
> >> +Shared devlink instances allow multiple physical functions (PFs) on the same
> >> +chip to share an additional devlink instance for chip-wide operations. This
> >> +should be implemented within individual drivers alongside the individual PF
> >> +devlink instances, not replacing them.
> >> +
> >> +The shared devlink instance should be backed by a faux device and should
> >> +provide a common interface for operations that affect the entire chip
> >> +rather than individual PFs.
> >
> >If we go with this we must state very clearly that this is a crutch and
> >_not_ the recommended configuration...
>
> Why "not recommented". If there is a usecase for this in a dirrerent
> driver, it is probably good to utilize the shared instance, isn't it?
> Perhaps I'm missing something.
Having a single instance seems preferable from the user's point of view.
> >... because presumably we could use this infra to manage a single
> >devlink instance? Which is what I asked for initially.
>
> I'm not sure I follow. If there is only one PF bound, there is a 1:1
> relationship. It depends on how many PFs of the same ASIC you have.
I'm talking about multi-PF devices. mlx5 supports a multi-PF setup for
NUMA locality, IIUC. In such configurations per-PF parameters can be
configured on the PCI PF ports.
> >Why can't this mutex live in the core?
>
> Well, the mutex protects the list of instances which are managed in the
> driver. If you want to move the mutex, I don't see how to do it without
> moving all the code related to shared devlink instances, including the
> faux probe etc. Is that what you suggest?
There are multiple ways you can solve it, but drivers shouldn't have to
duplicate all the instance management and locking. BTW, please don't use
guard().