Date:   Fri, 30 Mar 2018 09:54:37 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Christoph Hellwig <hch@...radead.org>
Cc:     Jakub Kicinski <jakub.kicinski@...ronome.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>, linux-pci@...r.kernel.org,
        Netdev <netdev@...r.kernel.org>,
        Sathya Perla <sathya.perla@...adcom.com>,
        Felix Manlunas <felix.manlunas@...iumnetworks.com>,
        John Fastabend <john.fastabend@...il.com>,
        Jacob Keller <jacob.e.keller@...el.com>,
        Donald Dutile <ddutile@...hat.com>, oss-drivers@...ronome.com
Subject: Re: [PATCH] PCI: allow drivers to limit the number of VFs to 0

On Fri, Mar 30, 2018 at 4:49 AM, Christoph Hellwig <hch@...radead.org> wrote:
> On Thu, Mar 29, 2018 at 11:22:31AM -0700, Jakub Kicinski wrote:
>> Some user space depends on driver allowing sriov_totalvfs to be
>> enabled.
>
> I can't make sense of this sentence.  Can you explain what user space
> code depends on what semantics?  The sriov_totalvfs file should show
> up for any device supporting SR-IOV as far as I can tell.
>
>>
>> For devices whose VF support depends on loaded FW, we
>> have the pci_sriov_{g,s}et_totalvfs() API.  However, this API
>> uses 0 as a special "unset" value, meaning drivers can't limit
>> sriov_totalvfs to 0.  Change the special value to be U16_MAX.
>> Use a simple min() to determine the actual totalvfs.
>
> Please use a PCI_MAX_VFS or similar define instead of plain U16_MAX or ~0.

Actually, is there any reason why driver_max_VFs couldn't just be
initialized to the same value as total_VFs?
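
Something along these lines is what I'm picturing -- a rough sketch
against drivers/pci/iov.c, not a tested patch; driver_max_VFs and
total_VFs are the existing struct pci_sriov fields, the local names
in sriov_init() are from memory:

/* In sriov_init(), after total_VFs is read from config space: */
        iov->total_VFs = total;
        iov->driver_max_VFs = total;    /* default: no driver limit */

/* Then the getter needs no 0 (or U16_MAX) sentinel at all: */
int pci_sriov_get_totalvfs(struct pci_dev *dev)
{
        if (!dev->is_physfn)
                return 0;

        return dev->sriov->driver_max_VFs;
}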

Also, looking over the patch I don't see how writing ~0 would be
accepted unless you also change pci_sriov_set_totalvfs(), since it
should fail the "numvfs > dev->sriov->total_VFs" check. You might
just want to look at adding a new function that resets the
driver_max_VFs value, instead of having the driver write an
arbitrary value to it.
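
Roughly what I have in mind -- pci_sriov_reset_totalvfs() is a
made-up name, just to illustrate the idea:

/*
 * Hypothetical helper: let a driver drop its limit and fall back to
 * the hardware total, instead of writing a magic "unset" value
 * through pci_sriov_set_totalvfs().
 */
void pci_sriov_reset_totalvfs(struct pci_dev *dev)
{
        if (!dev->is_physfn)
                return;

        dev->sriov->driver_max_VFs = dev->sriov->total_VFs;
}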

- Alex
