Date:   Tue, 26 Oct 2021 10:16:35 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Saeed Mahameed <saeedm@...dia.com>
Cc:     Jiri Pirko <jiri@...dia.com>, Moshe Shemesh <moshe@...dia.com>,
        Shay Drory <shayd@...dia.com>, Parav Pandit <parav@...dia.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [net-next 10/14] net/mlx5: Let user configure io_eq_size param

On Tue, 26 Oct 2021 15:54:28 +0000 Saeed Mahameed wrote:
> On Tue, 2021-10-26 at 08:05 -0700, Jakub Kicinski wrote:
> > On Mon, 25 Oct 2021 13:54:27 -0700 Saeed Mahameed wrote:  
> > > From: Shay Drory <shayd@...dia.com>
> > > 
> > > Currently, each I/O EQ is taking 128KB of memory. This size
> > > is not needed in all use cases, and is critical with large scale.
> > > Hence, allow user to configure the size of I/O EQs.
> > > 
> > > For example, to reduce I/O EQ size to 64, execute:
> > > $ devlink resource set pci/0000:00:0b.0 path /io_eq_size/ size 64
> > > $ devlink dev reload pci/0000:00:0b.0  
> > 
> > This sort of config is needed by more drivers,
> > we need a standard way of configuring this.
>
> We had a debate internally about the same thing; Jiri and I thought
> that EQ might be a ConnectX-only thing (maybe some other vendors have
> it) but it is not really popular

I thought it was an RDMA thing. At least according to grep there's
a handful of non-MLX drivers which have eqs. Are these not actual
event queues? (huawei/hinic, ibm/ehea, microsoft/mana, qlogic/qed)
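For reference, a rough way to reproduce that kind of grep against a kernel tree (the exact pattern searched is not in the mail, so this is only an approximation):

```shell
# Run from the root of a Linux kernel source tree (assumed available locally).
# Lists ethernet driver directories whose sources mention EQ creation; the
# pattern is a guess at what was searched, not taken from the mail.
git grep -il "create_eq\|event queue" -- drivers/net/ethernet | cut -d/ -f4 | sort -u
```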

> we thought, until other vendors start contributing or asking for 
> the same thing, maybe then we can standardize.

Yeah, like the standardization part ever happens :/ 

Look at the EQE/CQE interrupt generation thing. A new vendor comes in
and copies the best known practice (which is some driver-specific
garbage, ethtool priv-flags in that case). The maintainer (me) has to
be the policeman, remembering all those knobs with prior art and
pushing back. Most of the time the vendor decides to just keep the knob
out of tree at this point; kudos to Huawei for not giving up. The new
vendor implements the API, none of the existing vendors provide reviews
or feedback. Then none of the existing vendors implements the
now-standard API. Someone working for a large customer (me, again) has
to go and ask for the API to be implemented, which takes months even
though the patches should be trivial.

If the initial patches adding the cqe/eqe interrupt modes to priv-flags
had been nacked and the standard API created up front, we'd all have
saved a lot of time.

> > Sorry, I didn't have the time to look thru your patches
> > yesterday, I'm sending a revert for all your new devlink
> > params.  
> 
> Sure, we will submit an RFC to give other vendors a chance to comment;
> it will be basically the same patch (devlink resource) while making the
> parameters vendor-generic.

IDK if resource is the right fit (as mentioned to Parav in the
discussion on the revert).
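For comparison, the two devlink interfaces under discussion would be driven roughly as follows (device address taken from the quoted example; the generic param name is an assumption, since no standard spelling existed at the time):

```shell
# devlink resource, as in the reverted series:
devlink resource show pci/0000:00:0b.0
devlink resource set pci/0000:00:0b.0 path /io_eq_size/ size 64
devlink dev reload pci/0000:00:0b.0

# devlink dev param, the alternative generic-parameter interface
# (param name "io_eq_size" is hypothetical here):
devlink dev param set pci/0000:00:0b.0 name io_eq_size value 64 cmode driverinit
devlink dev reload pci/0000:00:0b.0
```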
