Message-ID: <DM6PR13MB370531053A394EE41080158FFC339@DM6PR13MB3705.namprd13.prod.outlook.com>
Date:   Thu, 27 Oct 2022 02:11:55 +0000
From:   Yinjun Zhang <yinjun.zhang@...igine.com>
To:     Saeed Mahameed <saeed@...nel.org>
CC:     Jakub Kicinski <kuba@...nel.org>,
        Simon Horman <simon.horman@...igine.com>,
        David Miller <davem@...emloft.net>,
        Paolo Abeni <pabeni@...hat.com>,
        Michael Chan <michael.chan@...adcom.com>,
        Andy Gospodarek <andy@...yhouse.net>,
        Gal Pressman <gal@...dia.com>,
        Jesse Brandeburg <jesse.brandeburg@...el.com>,
        Tony Nguyen <anthony.l.nguyen@...el.com>,
        Edward Cree <ecree.xilinx@...il.com>,
        Vladimir Oltean <vladimir.oltean@....com>,
        Andrew Lunn <andrew@...n.ch>,
        Nole Zhang <peng.zhang@...igine.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        oss-drivers <oss-drivers@...igine.com>
Subject: RE: [PATCH net-next 0/3] nfp: support VF multi-queues configuration

On Wed, 26 Oct 2022 15:22:21 +0100, Saeed Mahameed wrote:
> On 25 Oct 11:39, Yinjun Zhang wrote:
> >On Tue, 25 Oct 2022 12:05:14 +0100, Saeed Mahameed wrote:
> 
> Usually you create the VFs unbound, configure them, and then bind them.
> Otherwise a query will have to touch every possible VF, which for some
> vendors can be thousands! It's better to work on created but
> not-yet-deployed VFs.

Usually creating and binding are not separated; that's why `sriov_drivers_autoprobe`
defaults to true, unless some particular configuration requires it, such as mlnx's
MSI-X practice.
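
For reference, the unbound-VF workflow being discussed uses the standard PCI
sysfs knobs; a minimal sketch (the PCI addresses and driver name below are
placeholders, not from this thread):

```shell
# Sketch only: PCI addresses and the driver name are placeholders.
PF=0000:01:00.0

# 1. Stop drivers from auto-probing newly created VFs.
echo 0 > /sys/bus/pci/devices/$PF/sriov_drivers_autoprobe

# 2. Create the VFs; they come up unbound.
echo 4 > /sys/bus/pci/devices/$PF/sriov_numvfs

# 3. Configure the unbound VFs (e.g. via devlink), then bind each one manually.
echo 0000:01:00.1 > /sys/bus/pci/drivers/nfp/bind
```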

> 
> >Two options,
> >one is from the VF's perspective; you need to configure them one by one, very
> >straightforward:
> >```
> >pci/xxxx:xx:xx.x:
> >  name max_q size 128 unit entry
> >    resources:
> >      name VF0 size 1 unit entry size_min 1 size_max 128 size_gran 1
> >      name VF1 size 1 unit entry size_min 1 size_max 128 size_gran 1
> >      ...
> 
> the above semantics are really weird,
> VF0 can't be a sub-resource of max_q !

Sorry, I admit the naming is not appropriate here. It should be "total_q_for_VF"
for the parent resource and "q_for_VFx" for the sub-resources.
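
With that naming, per-VF configuration would go through the standard
`devlink resource` commands; a sketch where the PCI address is a placeholder
and the resource names are the ones proposed here, not an existing driver
interface:

```shell
# Hypothetical resource names from this proposal; PCI address is a placeholder.
devlink resource show pci/0000:01:00.0

# Give VF0 16 queues out of the shared pool:
devlink resource set pci/0000:01:00.0 path total_q_for_VF/q_for_VF0 size 16

# devlink resource changes normally take effect on the next reload:
devlink dev reload pci/0000:01:00.0
```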

> 
> Note that I called the resource "q_table" and not "max_queues", since
> semantically max_queues is a parameter, whereas q_table can be looked at
> as a sub-resource of the VF; the q_table size decides the max_queues a VF
> will accept, so there you go!

A queue itself is a kind of resource, so why "q_table"? Just because the unit is entry?
I think we need to introduce a new generic unit, so that its usage won't be limited.

> arghh weird.. just make it an attribute for devlink port function and
> name it max_q as god intended it to be ;). Fix your FW to allow changing
> VF max-queue for unbound VFs if needed.
> 

It's not a FW constraint. The reason I don't prefer the port approach is that it needs:
1. separating VF creation from binding, which requires an extra operation
2. registering extra ports for the VFs
Both can be avoided with the resource approach.
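
For comparison, the port-function alternative suggested above would look
roughly like this; note that "max_q" is only the attribute proposed in this
thread, not part of devlink today, and it presumes the VF already has a
registered port:

```shell
# Hypothetical: "max_q" is the attribute proposed in this thread,
# not an existing devlink port function attribute.
devlink port function set pci/0000:01:00.0/1 max_q 16
```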

> 
> >```
> >another is from the queue's perspective; several classes are supported, not very
> >flexible:
> >```
> >pci/xxxx:xx:xx.x:
> >  name max_q_class size 128 unit entry
> >    resources:
> >      # how many VFs get a max queue count of 16/8/.../1, respectively
> >      name _16 size 0 unit entry size_min 0 size_max 128 size_gran 1
> >      name _8 size 0 unit entry size_min 0 size_max 128 size_gran 1
> >      ...
> >      name _1 size 0 unit entry size_min 0 size_max 128 size_gran 1
> >```
> 
> weirder.

Yes, it's kind of obscure. The intention is to avoid configuring VFs one by one,
especially when there are thousands of them. Any better idea is welcome.
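
To make the trade-off concrete: with the per-VF layout every VF needs its own
command, while the class layout collapses that into one command per class. A
sketch using the hypothetical resource names from above and a placeholder
PCI address:

```shell
# Per-VF layout: one command per VF, painful with thousands of VFs.
for i in $(seq 0 1023); do
    devlink resource set pci/0000:01:00.0 path total_q_for_VF/q_for_VF$i size 4
done

# Class layout: "32 VFs get a max of 16 queues each" in a single command.
devlink resource set pci/0000:01:00.0 path max_q_class/_16 size 32
```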
