Message-ID: <CACKFLi=8ccry7fasGdTOvDMPNs588brCpdftSpv6vWHY-WM-eA@mail.gmail.com>
Date:   Wed, 30 May 2018 00:18:39 -0700
From:   Michael Chan <michael.chan@...adcom.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     "Samudrala, Sridhar" <sridhar.samudrala@...el.com>,
        David Miller <davem@...emloft.net>,
        Netdev <netdev@...r.kernel.org>,
        Or Gerlitz <gerlitz.or@...il.com>
Subject: Re: [PATCH net-next 1/3] net: Add support to configure SR-IOV VF
 minimum and maximum queues.

On Tue, May 29, 2018 at 11:33 PM, Jakub Kicinski
<jakub.kicinski@...ronome.com> wrote:

>
> At some point you (Broadcom) were working on a whole bunch of devlink
> configuration options for the PCIe side of the ASIC.  The number of
> queues relates to things like number of allocated MSI-X vectors, which
> if memory serves me was in your devlink patch set.  In an ideal world
> we would try to keep all those in one place :)

Yeah, another colleague is now working with Mellanox on something similar.

One difference between those devlink parameters and these queue
parameters is that the former are more permanent and global settings.
For example, the number of VFs or the number of MSI-X vectors per VF are
persistent settings; once set, they survive a PCIe reset.  These queue
settings, on the other hand, are pure run-time settings and may be unique
to each VF.  They are not stored, as there is no room in NVRAM to hold
128 or more sets of these parameters.
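
As a rough illustration (not the actual Broadcom patches), a persistent,
NVRAM-backed setting of that kind maps onto a devlink parameter registered
with the PERMANENT configuration mode, whereas a per-VF queue count is just
run-time driver state.  The "foo" prefix and the parameter name
"msix_vec_per_vf_max" below are invented for the sketch.

#include <linux/kernel.h>
#include <net/devlink.h>

/* Driver-specific parameter IDs start after the generic ones. */
enum {
	FOO_DEVLINK_PARAM_ID_MSIX_VEC_PER_VF_MAX =
		DEVLINK_PARAM_GENERIC_ID_MAX + 1,
};

static const struct devlink_param foo_devlink_params[] = {
	/*
	 * PERMANENT cmode: the value lives in NVRAM and only takes
	 * effect after a PCIe reset -- the "persistent and global" case.
	 */
	DEVLINK_PARAM_DRIVER(FOO_DEVLINK_PARAM_ID_MSIX_VEC_PER_VF_MAX,
			     "msix_vec_per_vf_max", DEVLINK_PARAM_TYPE_U32,
			     BIT(DEVLINK_PARAM_CMODE_PERMANENT),
			     NULL, NULL, NULL),
};

static void foo_devlink_params_register(struct devlink *devlink)
{
	/*
	 * Per-VF queue min/max, by contrast, would be plain run-time
	 * state kept in driver memory (one entry per VF) and applied
	 * immediately; nothing is written to NVRAM for it.
	 */
	devlink_params_register(devlink, foo_devlink_params,
				ARRAY_SIZE(foo_devlink_params));
}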

Anyway, let me discuss this with my colleague to see if there is a
natural fit for these queue parameters in the devlink infrastructure
that they are working on.

>
> For PCIe config there is always the question of what can be configured
> at runtime, and what requires a HW reset.  Therefore that devlink API
> which could configure current as well as persistent device settings was
> quite nice.  I'm not sure if reallocating queues would ever require
> PCIe block reset but maybe...  Certainly it seems the notion of min
> queues would make more sense in PCIe configuration devlink API than
> ethtool channel API to me as well.
>
> Queues are in the grey area between netdev and non-netdev constructs.
> They make sense both from PCIe resource allocation perspective (i.e.
> devlink PCIe settings) and netdev perspective (ethtool) because they
> feed into things like qdisc offloads, maybe per-queue stats etc.
>
> So yes...  IMHO it would be nice to add this to a devlink SR-IOV config
> API and/or switchdev representors.  But neither of those are really an
> option for you today so IDK :)
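
For context, the ethtool channel API mentioned above is the per-netdev
get_channels/set_channels pair in ethtool_ops, which is what "ethtool -L"
drives.  A bare-bones sketch of those hooks, with a made-up "foo" driver
rather than any real implementation:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Hypothetical per-device state, for the example only. */
struct foo_priv {
	u32 max_combined;	/* limit reported to user space    */
	u32 combined;		/* currently allocated queue pairs */
};

static void foo_get_channels(struct net_device *dev,
			     struct ethtool_channels *ch)
{
	struct foo_priv *priv = netdev_priv(dev);

	ch->max_combined = priv->max_combined;
	ch->combined_count = priv->combined;
}

static int foo_set_channels(struct net_device *dev,
			    struct ethtool_channels *ch)
{
	struct foo_priv *priv = netdev_priv(dev);

	if (!ch->combined_count || ch->combined_count > priv->max_combined)
		return -EINVAL;

	/* A real driver would reallocate rings and IRQ vectors here. */
	priv->combined = ch->combined_count;
	return 0;
}

static const struct ethtool_ops foo_ethtool_ops = {
	.get_channels	= foo_get_channels,
	.set_channels	= foo_set_channels,
};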
