Message-ID: <CAKgT0UfG-4zuKArCbSLkZoCKs2-t6r9U4qQPPUvBk-e_5_3FCg@mail.gmail.com>
Date: Wed, 25 Jul 2018 08:23:26 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Eran Ben Elisha <eranbe@...lanox.com>
Cc: Jakub Kicinski <jakub.kicinski@...ronome.com>,
Saeed Mahameed <saeedm@...lanox.com>,
Jiri Pirko <jiri@...nulli.us>,
"David S. Miller" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [net-next 10/16] net/mlx5: Support PCIe buffer congestion handling via Devlink
On Wed, Jul 25, 2018 at 5:31 AM, Eran Ben Elisha <eranbe@...lanox.com> wrote:
>
>
> On 7/24/2018 10:51 PM, Jakub Kicinski wrote:
>>>>
>>>>
>>>> The devlink params haven't been upstream even for a full cycle and
>>>> already you guys are starting to use them to configure standard
>>>> features like queuing.
>>>
>>>
>>> We developed the devlink params in order to support non-standard
>>> configuration only. And for non-standard configuration, there are
>>> generic and vendor-specific options.
>>
>>
>> I thought it was developed for performing non-standard and possibly
>> vendor specific configuration. Look at DEVLINK_PARAM_GENERIC_* for
>> examples of well justified generic options for which we have no
>> other API. The vendor mlx4 options look fairly vendor specific if you
>> ask me, too.
>>
>> Configuring queuing has an API. The question is whether it is
>> acceptable to enter the risky territory of controlling offloads via
>> devlink parameters, or whether we would rather make vendors take the
>> time and effort to model things to (a subset of) the existing APIs.
>> The HW never fits the APIs perfectly.
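
[ For reference, since DEVLINK_PARAM_GENERIC_* is mentioned above: a
  minimal sketch of how a driver registers a generic devlink parameter
  with the current net-next API. The foo_* names and the values used
  below are placeholders, not taken from any real driver. ]

#include <net/devlink.h>

static int foo_max_macs_get(struct devlink *devlink, u32 id,
			    struct devlink_param_gset_ctx *ctx)
{
	ctx->val.vu32 = 128;	/* report the currently configured value */
	return 0;
}

static int foo_max_macs_set(struct devlink *devlink, u32 id,
			    struct devlink_param_gset_ctx *ctx)
{
	/* apply ctx->val.vu32 to the device */
	return 0;
}

static const struct devlink_param foo_devlink_params[] = {
	/* id, name and type come from the generic MAX_MACS definition */
	DEVLINK_PARAM_GENERIC(MAX_MACS,
			      BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			      foo_max_macs_get, foo_max_macs_set, NULL),
};

/* called from the driver's devlink setup path */
static int foo_register_params(struct devlink *devlink)
{
	return devlink_params_register(devlink, foo_devlink_params,
				       ARRAY_SIZE(foo_devlink_params));
}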
>
>
> I understand what you meant here, but I would like to highlight that
> this mechanism was not meant to handle SR-IOV, representors, etc.
> The vendor-specific configuration suggested here is to handle a
> congestion state in a multi-host environment (which includes a PF and
> multiple VFs per host), where one host is not aware of the other
> hosts, and each is running on its own PCI function/driver. It is a
> device working-mode configuration.
>
> This does not fit into any existing API, and thus a unique
> vendor-specific API is needed.
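
[ To make the above concrete: a driver-specific parameter goes through
  exactly the same devlink params machinery, just with a driver-chosen
  id and name. The "congestion_action" name, the foo_* identifiers and
  the value encoding below are made up for illustration and are not
  taken from the actual mlx5 patches. ]

#include <net/devlink.h>

enum {
	/* driver-specific ids live above the generic id space */
	FOO_DEVLINK_PARAM_ID_CONGESTION_ACTION =
		DEVLINK_PARAM_GENERIC_ID_MAX + 1,
};

static int foo_cong_action_get(struct devlink *devlink, u32 id,
			       struct devlink_param_gset_ctx *ctx)
{
	ctx->val.vu8 = 0;	/* e.g. 0 = disabled, 1 = drop, 2 = mark */
	return 0;
}

static int foo_cong_action_set(struct devlink *devlink, u32 id,
			       struct devlink_param_gset_ctx *ctx)
{
	/* push ctx->val.vu8 to firmware for this host's PCIe function */
	return 0;
}

static const struct devlink_param foo_cong_params[] = {
	DEVLINK_PARAM_DRIVER(FOO_DEVLINK_PARAM_ID_CONGESTION_ACTION,
			     "congestion_action", DEVLINK_PARAM_TYPE_U8,
			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			     foo_cong_action_get, foo_cong_action_set,
			     NULL),
};

static int foo_register_cong_params(struct devlink *devlink)
{
	return devlink_params_register(devlink, foo_cong_params,
				       ARRAY_SIZE(foo_cong_params));
}

[ From userspace this would then be driven per PCIe function with
  something like "devlink dev param set pci/0000:82:00.0 name
  congestion_action value 1 cmode runtime"; the device address and
  value are, again, only illustrative. ]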
If we are just going to start creating devlink interfaces for every
one-off option a device wants to add, why did we even bother trying to
prevent drivers from using sysfs? This just feels like we are back to
the same arguments we had with sysfs back in the day.
I feel like the bigger question here is whether devlink is how we are
going to deal with all PCIe-related features going forward, or whether
we should start looking at creating a new interface/tool for PCI/PCIe
related features. My concern is that we have already had features such
as DMA Coalescing that didn't really fit into anything, and now we are
starting to see other things related to DMA and PCIe bus credits. I'm
wondering if we shouldn't start looking at a tool/interface to
configure all the PCIe-related features such as interrupts, error
reporting, DMA configuration, power management, etc. Maybe we could
even look at sharing it across subsystems and bring storage, graphics,
and other subsystems into the conversation.
- Alex