Message-ID: <9fda7682-45f0-8cce-c3e4-2d58cba08edb@mellanox.com>
Date: Thu, 2 Aug 2018 18:07:18 +0300
From: Eran Ben Elisha <eranbe@...lanox.com>
To: David Miller <davem@...emloft.net>, jakub.kicinski@...ronome.com
Cc: saeedm@...lanox.com, netdev@...r.kernel.org, jiri@...lanox.com,
alexander.duyck@...il.com, helgaas@...nel.org
Subject: Re: [pull request][net-next 00/10] Mellanox, mlx5 and devlink updates
2018-07-31
On 8/2/2018 4:40 AM, David Miller wrote:
> From: Jakub Kicinski <jakub.kicinski@...ronome.com>
> Date: Wed, 1 Aug 2018 17:00:47 -0700
>
>> On Wed, 1 Aug 2018 14:52:45 -0700, Saeed Mahameed wrote:
>>> - According to the discussion outcome, we are keeping the congestion control
>>> setting as mlx5 device specific for the current HW generation.
>>
>> I still see queuing and marking based on queue level. You want to add
>> a Qdisc that will mirror your HW's behaviour to offload, if you really
>> believe this is not a subset of RED, why not... But devlink params?
>
> I totally agree, devlink seems like absolutely the wrong level and set
> of interfaces to be doing this stuff.
>
> I will not pull these changes in, and I probably should not have
> accepted the DCB changes from the other day, as they were sneakily
> leading up to this crap.
>
> Sorry, please follow Jakub's lead as I think his approach makes much
> more technical sense than using devlink for this.
>
> Thanks.
>
Hi Dave,
I would like to re-state that this feature was not meant to be a generic
one. It was added in order to work around a HW bug which exists in
a small portion of our devices. These params will be used only on those
current HW generations and won't be used for our future devices.
During the discussions, several alternatives were offered by various
members of the community. These alternatives include TC and
enhancements to the PCI configuration tools.
Regarding TC, from my perspective this is not an option because:
1) The HW mechanism handles multiple functions and therefore cannot be
configured on a single netdev as a regular TC qdisc.
2) No PF + representors modeling can be applied here; this is a
multi-host environment where one host is not aware of the other hosts,
and each is running on its own PCI function/driver. It is a device
working-mode configuration.
3) The current HW workaround is very limited; it may use an algorithm
similar to WRED, but it serves a much simpler and different use case
(PCI bus congestion). It cannot be compared to a standard TC capability
(RED/WRED), and defining it as an offload fully controlled by the user
would be a big misuse (for example, the drop rate cannot be
configured). A standard RED configuration is shown below for contrast.
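
For contrast, here is a minimal sketch of what a user-controlled
standard RED qdisc looks like; the device name and values are
illustrative only, not taken from our patch set:

  # illustrative example: standard RED exposes thresholds, drop
  # probability, ECN marking, bandwidth hints, etc. to the user
  tc qdisc add dev eth0 root red limit 400000 min 30000 max 90000 \
      avpkt 1000 burst 55 probability 0.02 bandwidth 10Mbit ecn

The HW workaround exposes nothing close to this level of control (as
noted, even the drop rate is not configurable).
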
Regarding the PCI configuration tools, there was a consensus that such
a tool is not acceptable, as this mechanism is not part of the PCI spec.
Since module params/sysfs/debugfs/etc. are no longer acceptable, and
drivers still need a way to apply device/driver configurations that
cannot be expressed with standard Linux tools or shared with other
vendors, devlink params were developed (under the assumption that this
tool would be helpful for those needs, and those only).
From my perspective, devlink is the tool for configuring the device to
handle such unexpected bugs, i.e. the "PCIe buffer congestion handling
workaround".
Thanks,
Eran