Date:   Wed, 9 May 2018 17:22:50 -0700
From:   Michael Chan <michael.chan@...adcom.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     Or Gerlitz <gerlitz.or@...il.com>,
        David Miller <davem@...emloft.net>,
        Netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next RFC 1/3] net: Add support to configure SR-IOV VF
 minimum and maximum queues.

On Wed, May 9, 2018 at 4:15 PM, Jakub Kicinski
<jakub.kicinski@...ronome.com> wrote:
> On Wed,  9 May 2018 07:21:41 -0400, Michael Chan wrote:
>> VF Queue resources are always limited and there is currently no
>> infrastructure to allow the admin. on the host to add or reduce queue
>> resources for any particular VF.  With ever increasing number of VFs
>> being supported, it is desirable to allow the admin. to configure queue
>> resources differently for the VFs.  Some VFs may require more or fewer
>> queues due to different bandwidth requirements or different number of
>> vCPUs in the VM.  This patch adds the infrastructure to do that by
>> adding IFLA_VF_QUEUES netlink attribute and a new .ndo_set_vf_queues()
>> to the net_device_ops.
>>
>> Four parameters are exposed for each VF:
>>
>> o min_tx_queues - Guaranteed or current tx queues assigned to the VF.
>
> This muxing of semantics may be a little awkward and unnecessary;
> would it make sense for struct ifla_vf_info to have separate fields
> for the current number of queues and the admin-set guaranteed min?

The loose semantics are mainly there to allow some flexibility in
implementation.  Sure, we can tighten the definitions or add
additional fields.
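
For concreteness, here is a minimal sketch (not the actual patch; the
field names and the exact split are illustrative only) of what
separate fields and the new hook could look like:

  #include <linux/types.h>

  /* Sketch only: one way to split the muxed "guaranteed or current"
   * semantics into distinct fields, per your suggestion.  The real
   * struct ifla_vf_info carries many more members.
   */
  struct ifla_vf_queues_info {
          __u32 vf;
          __u32 min_tx_queues;    /* admin-set guaranteed minimum */
          __u32 cur_tx_queues;    /* currently allocated by the VF */
          __u32 max_tx_queues;    /* non-guaranteed upper bound */
          __u32 min_rx_queues;
          __u32 cur_rx_queues;
          __u32 max_rx_queues;
  };

  /* Hypothetical shape of the new net_device_ops hook; the RFC's
   * actual signature may differ.
   */
  struct net_device;
  extern int (*ndo_set_vf_queues)(struct net_device *dev, int vf,
                                  int min_txq, int max_txq,
                                  int min_rxq, int max_rxq);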

>
> Is there a real world use case for the min value or are you trying to
> make the API feature complete?

In this proposal, these parameters are mainly viewed as the bounds for
the queues that each VF can potentially allocate.  The actual number
of queues chosen by the VF driver or modified by the VF user can be
any number within the bounds.

We currently need to have min and max parameters to support the
different modes we use to distribute the queue resources to the VFs.
In one mode, for example, resources are statically divided and each VF
has a small number of guaranteed queues (min = max).  In a different
mode, we allow more flexible resource allocation, with each VF having
a small number of guaranteed queues and a larger number of
non-guaranteed queues (min < max).  Some VFs may be able to allocate
many more queues than min while resources are still available, while
others may only be able to allocate min queues once resources are
exhausted.

With min and max exposed, the PF user can properly tune the resources
for each VF in the modes described above.
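
To make this concrete with hypothetical numbers (command syntax is
illustrative; the iproute2 patch is not posted yet): suppose the PF
has 128 tx queues to divide among 16 VFs.  In the static mode, each
VF simply gets min = max = 8.  In the flexible mode, the admin could
guarantee 4 and allow up to 32 per VF, so early requesters can grow
well beyond 4 while late ones still get their guaranteed 4:

  # Hypothetical syntax for the planned "ip link set" extension:
  ip link set dev eth0 vf 0 min_tx_queues 4 max_tx_queues 32 \
                            min_rx_queues 4 max_rx_queues 32

  # Inside the VM, the VF driver then sizes its channels within the
  # bounds using the existing ethtool interface:
  ethtool -L eth1 rx 16 tx 16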

>
>> o max_tx_queues - Maximum but not necessarily guaranteed tx queues
>>   available to the VF.
>>
>> o min_rx_queues - Guaranteed or current rx queues assigned to the VF.
>>
>> o max_rx_queues - Maximum but not necessarily guaranteed rx queues
>>   available to the VF.
>>
>> The "ip link set" command will subsequently be patched to support the new
>> operation to set the above parameters.
>>
>> After the admin. makes a change to the above parameters, the corresponding
>> VF will have a new range of channels to set using ethtool -L.
>>
>> Signed-off-by: Michael Chan <michael.chan@...adcom.com>
>
> In switchdev mode we can use number of queues on the representor as a
> proxy for max number of queues allowed for the ASIC port.  This works
> better when representors are muxed in the first place than when they
> have actual queues backing them.  WDYT about such scheme, Or?  A very
> pleasant side-effect is that one can configure qdiscs and get stats
> per-HW queue.

This is an interesting approach.  But it doesn't provide the min and
max for each VF, and it only works in switchdev mode.
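
For illustration, the representor-based scheme as I understand it
would look something like this from the PF side (the representor name
and queue counts here are hypothetical):

  # Resize the VF's representor as a proxy for the VF's max queues:
  ethtool -L eth0_vf0 combined 8

  # Per-HW-queue qdiscs and stats then become visible on the PF:
  tc qdisc add dev eth0_vf0 root mq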
