Message-ID: <20180718184947.6e472ee4@cakuba.netronome.com>
Date: Wed, 18 Jul 2018 18:49:47 -0700
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Saeed Mahameed <saeedm@...lanox.com>, Jiri Pirko <jiri@...nulli.us>
Cc: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
Eran Ben Elisha <eranbe@...lanox.com>
Subject: Re: [net-next 10/16] net/mlx5: Support PCIe buffer congestion handling via Devlink

On Wed, 18 Jul 2018 18:01:01 -0700, Saeed Mahameed wrote:
> +static const struct devlink_param mlx5_devlink_params[] = {
> +	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_CONGESTION_ACTION,
> +			     "congestion_action",
> +			     DEVLINK_PARAM_TYPE_U8,
> +			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
> +			     mlx5_devlink_get_congestion_action,
> +			     mlx5_devlink_set_congestion_action, NULL),
> +	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_CONGESTION_MODE,
> +			     "congestion_mode",
> +			     DEVLINK_PARAM_TYPE_U8,
> +			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
> +			     mlx5_devlink_get_congestion_mode,
> +			     mlx5_devlink_set_congestion_mode, NULL),
> +};
The devlink params haven't been upstream even for a full cycle and
already you guys are starting to use them to configure standard
features like queuing.
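
For illustration, with this patch the knobs presumably end up being driven
from userspace along these lines (the param names are from the patch; the
device name and value encoding here are just placeholders):

  devlink dev param set pci/0000:03:00.0 \
      name congestion_action value 1 cmode runtime
  devlink dev param set pci/0000:03:00.0 \
      name congestion_mode value 1 cmode runtime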
I know your HW is not capable of doing full RED offload; it's a
snowflake. You tell us you're doing custom DCB configuration hacks on
one side (the previous argument we had) and custom devlink parameter
configuration hacks on the PCIe side.
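
For comparison, the existing Linux API for offloading queuing behaviour
like RED is the qdisc offload path. A minimal sketch of how that would be
wired up through .ndo_setup_tc, modelled on what mlxsw does -- the RED
case is hypothetical for mlx5 and mlx5e_red_replace() is a made-up helper:

  /* uses TC_SETUP_QDISC_RED / struct tc_red_qopt_offload
   * from include/net/pkt_cls.h
   */
  static int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
			    void *type_data)
  {
	switch (type) {
	case TC_SETUP_QDISC_RED: {
		struct tc_red_qopt_offload *red = type_data;

		if (red->command != TC_RED_REPLACE)
			return -EOPNOTSUPP;
		/* program thresholds and ECN marking into the NIC */
		return mlx5e_red_replace(netdev_priv(dev), red->handle,
					 red->set.min, red->set.max,
					 red->set.is_ecn);
	}
	default:
		return -EOPNOTSUPP;
	}
  }

which the user then drives with the standard tooling, roughly:

  tc qdisc replace dev eth0 root handle 1: red \
      limit 400000 min 30000 max 90000 avpkt 1000 burst 55 ecn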
Perhaps the idea that we're trying to use the existing Linux APIs for
HW configuration only applies to forwarding behaviour.