Date:   Mon, 23 Oct 2017 14:37:51 +0000
From:   Yuval Mintz <yuvalm@...lanox.com>
To:     Steve Lin <steven.lin1@...adcom.com>
CC:     Jiri Pirko <jiri@...nulli.us>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Jiri Pirko <jiri@...lanox.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "michael.chan@...adcom.com" <michael.chan@...adcom.com>,
        "linville@...driver.com" <linville@...driver.com>,
        "gospo@...adcom.com" <gospo@...adcom.com>
Subject: RE: [PATCH net-next v2 5/6] devlink: Adding num MSI-X vectors per VF
 NVRAM config param

> >> On Fri, Oct 20, 2017 at 10:10 AM, Jiri Pirko <jiri@...nulli.us> wrote:
> >> > Fri, Oct 20, 2017 at 04:03:55PM CEST, steven.lin1@...adcom.com wrote:
> >> >>On Thu, Oct 19, 2017 at 5:39 PM, Jiri Pirko <jiri@...nulli.us> wrote:
> >> >>> Thu, Oct 19, 2017 at 10:32:21PM CEST, yuvalm@...lanox.com wrote:
> >> >>>>> Adding DEVLINK_PERM_CONFIG_MSIX_VECTORS_PER_VF permanent
> >> >>>>> config parameter.  Defines number of MSI-X vectors allocated per
> >> >>>>> VF.  Value is permanent (stored in NVRAM), so becomes the new
> >> >>>>> default value for this device.
> >> >>>>
> >> >>>>Sounds like you're having this enforce the same configuration for
> >> >>>>all child VFs.
> >> >>>
> >> >>> Yeah, this sounds like per-port config.
> >> >>>
> >> >>
> >> >>Well, it gets a little tricky here.  I assume some cards handle this
> >> >>per-port.  Other cards might handle this per PF, where PF may not
> >> >>always correspond 1:1 with a port.  And some cards maybe just allow a
> >> >>single value for this parameter for the entire card, covering all
> >> >>ports/PFs.
> >> >>
> >> >>To keep things simple and as general as possible, it made sense to set
> >> >>all parameters on a per-PCI device level.  As I mentioned in my
> >> >>cover-letter, the devices most likely to use these proposed commands
> >> >>do not have a single "whole asic" PCI b/d/f with internal mechanism
> >> >>for accessing ports - most expose each port (and each function on each
> >> >>port) as a separate PCI b/d/f, with no separate "whole asic" PCI
> >> >>b/d/f.  That's how the BCM cards work, and I think that's how the
> >> >>MLNX cards work, and others that would be likely to use these cmds.
> >> >>
> >> >>So, to summarize, you direct the command to the PCI b/d/f you want
> >> >>to target.  Does this make sense?
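
[Editor's note: purely as a hypothetical sketch of the usage model described above. Each port/PF is addressed through its own PCI b/d/f, i.e. its own devlink instance. The syntax below is modeled on the iproute2 devlink CLI; the parameter name and PCI addresses are made up for illustration and are not the literal interface proposed in this patchset.]

```shell
# Illustrative only: on a multi-port NIC with no "whole asic" function,
# each port/PF appears as its own PCI b/d/f, i.e. its own devlink instance.
devlink dev show

# A permanent (NVRAM) parameter is then directed at the b/d/f you want to
# target (parameter name and address are hypothetical):
devlink dev param set pci/0000:5e:00.0 \
    name msix_vec_per_vf value 16 cmode permanent
```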
> >> >
> >> > So you plan to have 1 devlink instance for each vf? Not sure that
> >> > sounds right to me :/
> >> >
> >>
> >> For the commands proposed in this patchset, AFAIK they all apply on a
> >> per-PF or broader, i.e. per-port or whole-card, granularity, since
> >> they affect permanent config that applies at boot-up.  So, no, the VFs
> >> don't really come into play here.
> >
> > Regardless of whether you're planning on having VFs as devlink instances,
> > the actual attribute question remains -
> > you're proposing an attribute that forces all VFs to have the same value.
> > This probably suits your PCI core limitations but other vendors might have
> > a different capability set, and accepting this design limitation now would
> > muck up all future attempts to extend such attributes.
> >
> > I think VF configurations should be planned in advance for supporting a
> > per-VF configuration whenever it's possible - even if not required
> > [/possible] by the one pushing the new attribute.
> >
> 
> The commands being added in this patch are for permanent (i.e. NVRAM)
> config - essentially setting the new default values for various
> features of the device at boot-up.  At that initialization time, no
> VFs are yet instantiated.
> 
> So my perspective was, in general (not just for our specific device /
> design), it doesn't seem like permanent config parameters would be set
> on individual VFs.  That was what my previous comment was trying to
> convey.

That's an odd assumption; why should you assume there's some device
that allows configuring persistent behavior for all VFs, but that no other
would set the same on a per-VF basis?

> If that assumption is wrong, though, and there is some device that has
> NVRAM config that is set per-VF, I assume the user would instantiate
> the VF and then call the devlink API on the PCI device corresponding
> to the VF they wish to affect, and I think the model proposed still
> works.
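
[Editor's note: again only a sketch of the flow just described, with hypothetical addresses and parameter name: instantiate the VFs first, then direct the devlink call at the VF's own PCI b/d/f. The `sriov_numvfs` sysfs attribute is the standard kernel SR-IOV interface; the devlink syntax is illustrative.]

```shell
# Hypothetical per-VF flow: create the VFs, which gives each one its own
# PCI b/d/f...
echo 4 > /sys/bus/pci/devices/0000:5e:00.0/sriov_numvfs

# ...then address a per-VF parameter at that VF's own b/d/f
# (parameter name and address are illustrative):
devlink dev param set pci/0000:5e:02.0 \
    name example_vf_param value 1 cmode permanent
```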

What would be the purpose of re-configuring a value reflected in the
PCI device for an already instantiated VF?

> Are you suggesting adding a mechanism to set NVRAM parameters on a
> per-VF basis, without instantiating the VF first?  I would prefer not
> adding such a mechanism unless/until there's a use case for it.

The thing is that you're suggesting a new UAPI; we don't have the leisure
of pushing a partial implementation and changing it later on.
