Date:   Tue, 24 Jul 2018 12:51:35 -0700
From:   Jakub Kicinski <jakub.kicinski@...ronome.com>
To:     Eran Ben Elisha <eranbe@...lanox.com>
Cc:     Saeed Mahameed <saeedm@...lanox.com>,
        Jiri Pirko <jiri@...nulli.us>,
        "David S. Miller" <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [net-next 10/16] net/mlx5: Support PCIe buffer congestion
 handling via Devlink

On Tue, 24 Jul 2018 13:31:28 +0300, Eran Ben Elisha wrote:
> On 7/19/2018 4:49 AM, Jakub Kicinski wrote:
> > On Wed, 18 Jul 2018 18:01:01 -0700, Saeed Mahameed wrote:  
> >> +static const struct devlink_param mlx5_devlink_params[] = {
> >> +	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_CONGESTION_ACTION,
> >> +			     "congestion_action",
> >> +			     DEVLINK_PARAM_TYPE_U8,
> >> +			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
> >> +			     mlx5_devlink_get_congestion_action,
> >> +			     mlx5_devlink_set_congestion_action, NULL),
> >> +	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_CONGESTION_MODE,
> >> +			     "congestion_mode",
> >> +			     DEVLINK_PARAM_TYPE_U8,
> >> +			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
> >> +			     mlx5_devlink_get_congestion_mode,
> >> +			     mlx5_devlink_set_congestion_mode, NULL),
> >> +};  
> > 
> > The devlink params haven't been upstream even for a full cycle and
> > already you guys are starting to use them to configure standard
> > features like queuing.  
> 
> We developed the devlink params in order to support non-standard
> configuration only. And for non-standard configuration, there are
> both generic and vendor-specific options.

I thought it was developed for performing non-standard and possibly
vendor-specific configuration.  Look at DEVLINK_PARAM_GENERIC_* for
examples of well-justified generic options for which we have no other
API.  The mlx4 vendor options look fairly vendor-specific to me, too.
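
Just to make the distinction concrete, here is a minimal sketch
against the current devlink param API; the "foo" param, its id and the
drv_* callbacks are made-up placeholders, not anything from the mlx5
patch:

#include <net/devlink.h>

enum {
        /* driver-specific ids must live above the generic id space */
        DRV_DEVLINK_PARAM_ID_FOO = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
};

static int drv_dl_foo_get(struct devlink *devlink, u32 id,
                          struct devlink_param_gset_ctx *ctx)
{
        ctx->val.vu8 = 0;       /* would report the current HW setting */
        return 0;
}

static int drv_dl_foo_set(struct devlink *devlink, u32 id,
                          struct devlink_param_gset_ctx *ctx)
{
        return 0;               /* would push ctx->val.vu8 to the HW */
}

static const struct devlink_param drv_devlink_params[] = {
        /* generic: name, type and semantics are owned by devlink core,
         * so every vendor exposing it means the same thing; the value
         * is read at init via devlink_param_driverinit_value_get()
         */
        DEVLINK_PARAM_GENERIC(MAX_MACS,
                              BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
                              NULL, NULL, NULL),
        /* driver-specific: name, type and semantics are whatever the
         * driver says they are, as in the hunk quoted above
         */
        DEVLINK_PARAM_DRIVER(DRV_DEVLINK_PARAM_ID_FOO,
                             "foo", DEVLINK_PARAM_TYPE_U8,
                             BIT(DEVLINK_PARAM_CMODE_RUNTIME),
                             drv_dl_foo_get, drv_dl_foo_set, NULL),
};

Both arrays go through devlink_params_register(); the difference is
only in who defines the name/type and whether userspace can rely on
the semantics being the same across vendors.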

Configuring queuing has an API.  The question is whether it's
acceptable to enter the risky territory of controlling offloads via
devlink parameters, or whether we'd rather make vendors take the time
and effort to model things to (a subset of) existing APIs.  The HW
never fits the APIs perfectly.
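
To be concrete about "queuing has an API": this is roughly what the
existing qdisc offload entry point looks like for RED
(TC_SETUP_QDISC_RED via .ndo_setup_tc).  The drv_* names are
placeholders and the fields are from memory of the ~4.18 layout of
struct tc_red_qopt_offload, so treat it as a sketch:

#include <linux/netdevice.h>
#include <net/pkt_cls.h>

/* Userspace keeps using "tc qdisc ... red ..."; the driver only maps
 * (a subset of) the standard parameters onto whatever the HW buffer
 * actually supports.
 */
static int drv_setup_tc_red(struct net_device *dev,
                            struct tc_red_qopt_offload *opt)
{
        switch (opt->command) {
        case TC_RED_REPLACE:
                /* program opt->set.min, opt->set.max,
                 * opt->set.probability and opt->set.is_ecn into the HW
                 */
                return 0;
        case TC_RED_DESTROY:
                /* revert to default queuing behaviour */
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}

/* .ndo_setup_tc callback */
static int drv_setup_tc(struct net_device *dev, enum tc_setup_type type,
                        void *type_data)
{
        switch (type) {
        case TC_SETUP_QDISC_RED:
                return drv_setup_tc_red(dev, type_data);
        default:
                return -EOPNOTSUPP;
        }
}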

> The queuing model is a standard. However, here we are configuring the
> outbound PCIe buffers on the receive path from the NIC port toward the
> host(s) in a single-/multi-host environment.

That's why we have PF representors.

> (You can see how the driver processes this param in the RX patch for
> the marked option here: https://patchwork.ozlabs.org/patch/945998/)
>
> > I know your HW is not capable of doing full RED offload; it's a
> > snowflake.
> 
> The algorithm applied here for the drop option is not the core of
> this feature.
> 
> > You tell us you're doing custom DCB configuration hacks on one side
> > (the previous argument we had) and custom devlink parameter
> > configuration hacks on the PCIe side.
> > 
> > Perhaps the idea that we're trying to use the existing Linux APIs for
> > HW configuration only applies to forwarding behaviour.  
> 
> Hopefully I explained well enough above why it is not related.

Sure ;)
