Message-ID: <n6mey5dbfpw7ykp3wozgtxo5grvac642tskcn4mqknrurhpwy7@ugolzkzzujba>
Date: Mon, 1 Dec 2025 11:50:08 +0100
From: Jiri Pirko <jiri@...nulli.us>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Tariq Toukan <tariqt@...dia.com>, Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Donald Hunter <donald.hunter@...il.com>,
Jonathan Corbet <corbet@....net>, Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>, Mark Bloch <mbloch@...dia.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org, linux-rdma@...r.kernel.org,
Gal Pressman <gal@...dia.com>, Moshe Shemesh <moshe@...dia.com>,
Carolina Jubran <cjubran@...dia.com>, Cosmin Ratiu <cratiu@...dia.com>, Jiri Pirko <jiri@...dia.com>,
Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH net-next V4 02/14] documentation: networking: add shared
devlink documentation
On Sat, Nov 29, 2025 at 04:19:24AM +0100, kuba@...nel.org wrote:
>On Fri, 28 Nov 2025 12:00:13 +0100 Jiri Pirko wrote:
>> >> +Shared devlink instances allow multiple physical functions (PFs) on the same
>> >> +chip to share an additional devlink instance for chip-wide operations. This
>> >> +should be implemented within individual drivers alongside the individual PF
>> >> +devlink instances, not replacing them.
>> >> +
>> >> +The shared devlink instance should be backed by a faux device and should
>> >> +provide a common interface for operations that affect the entire chip
>> >> +rather than individual PFs.
>> >
>> >If we go with this we must state very clearly that this is a crutch and
>> >_not_ the recommended configuration...
>>
>> Why "not recommended"? If there is a use case for this in a different
>> driver, it is probably good to utilize the shared instance, isn't it?
>> Perhaps I'm missing something.
>
>Having a single instance seems preferable from user's point of view.
Sure, if there is no need for sharing, correct.
>
>> >... because presumably we could use this infra to manage a single
>> >devlink instance? Which is what I asked for initially.
>>
>> I'm not sure I follow. If there is only one PF bound, there is 1:1
>> relationship. Depends on how many PFs of the same ASIC you have.
>
>I'm talking about multi-PF devices. mlx5 supports multi-PF setup for
>NUMA locality IIUC. In such configurations per-PF parameters can be
>configured on PCI PF ports.
Correct. AFAIK there is one PF devlink instance per NUMA node. The
shared instance on top would make sense to me. That was one of the
motivations to introduce it. Then this shared instance would hold
the netdev, VF representors, etc.
>
>> >Why can't this mutex live in the core?
>>
>> Well, the mutex protects the list of instances which are managed in the
>> driver. If you want to move the mutex, I don't see how to do it without
>> moving all the code related to shared devlink instances, including the faux
>> probe etc. Is that what you suggest?
>
>Multiple ways you can solve it, but drivers shouldn't have to duplicate
>all the instance management and locking. BTW please don't use guard().
I'm having trouble understanding what you're saying, sorry :/ Do you
prefer to move the code from the driver to the devlink core, or not?
Regarding guard(), sure. I wonder how much more time it's gonna take
until this resistance fades out :)