Date:	Wed, 19 Sep 2012 14:07:19 +0300
From:	"Yuval Mintz" <yuvalmin@...adcom.com>
To:	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
cc:	"Ariel Elior" <ariele@...adcom.com>,
	"Eilon Greenstein" <eilong@...adcom.com>
Subject: Re: New commands to configure IOV features

>>> Back to the original discussion though--has anyone got any ideas about
>>> the best way to trigger runtime creation of VFs?  I don't know what
>>> the binary APIs look like, but via sysfs I could see something like
>>>
>>> echo number_of_new_vfs_to_create >
>>> /sys/bus/pci/devices/<address>/create_vfs
>>>
>>> Something else that occurred to me--is there buy-in from driver
>>> maintainers?  I know the Intel ethernet drivers (what I'm most
>>> familiar
>>> with) would need to be substantially modified to support on-the-fly
>>> addition of new vfs.  Currently they assume that the number of vfs is
>>> known at module init time.
>>
>> Why couldn't rtnl_link_ops be used for this? It is already the preferred
>> interface for creating VLANs, bond devices, and other virtual devices.
>> The one issue is whether the created VFs exist in the kernel as devices
>> or are only visible to the guest.
> 
> I would say that rtnl_link_ops are network oriented and not appropriate
> for something like a storage controller or graphics device, which are two
> other common SR-IOV capable devices.
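
(To make the sysfs suggestion quoted above concrete, the driver-side hook
could look roughly like the sketch below. The "create_vfs" attribute name
and the foo_* identifiers are hypothetical, not an existing interface;
only pci_enable_sriov()/pci_disable_sriov() are real kernel calls.)

/*
 * Rough sketch only: "create_vfs" and the foo_* names are hypothetical.
 * Userspace would write a VF count into the attribute, and the store
 * handler would enable or disable SRIOV accordingly.
 */
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/pci.h>

static ssize_t foo_create_vfs_store(struct device *dev,
				    struct device_attribute *attr,
				    const char *buf, size_t count)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	unsigned int num_vfs;
	int rc;

	if (kstrtouint(buf, 0, &num_vfs))
		return -EINVAL;

	if (num_vfs == 0) {
		pci_disable_sriov(pdev);
		return count;
	}

	rc = pci_enable_sriov(pdev, num_vfs);
	return rc ? rc : count;
}

static DEVICE_ATTR(create_vfs, S_IWUSR, NULL, foo_create_vfs_store);

/* The attribute would typically be registered from the driver's probe(): */
static int foo_add_create_vfs_attr(struct pci_dev *pdev)
{
	return device_create_file(&pdev->dev, &dev_attr_create_vfs);
}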

Hi Dave,

We're currently fine-tuning our SRIOV support, which we will shortly
send upstream.

We've encountered a problem, though: all drivers that currently support
SRIOV do so via a module parameter, e.g., 'max_vfs' for ixgbe,
'num_vfs' for benet, etc.
The SRIOV feature is disabled by default in all of these drivers; it can
only be enabled through that module parameter.
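
For reference, that pattern boils down to something like the sketch below
(the "foo" and "num_vfs" names here are placeholders, not any particular
driver's code):

/* Sketch of the existing module-parameter pattern; names are placeholders. */
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/pci.h>

static unsigned int num_vfs;
module_param(num_vfs, uint, 0444);
MODULE_PARM_DESC(num_vfs, "Number of VFs to enable at probe time (0 = SRIOV disabled)");

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	/* ... regular device bring-up ... */

	/* SRIOV is enabled only if the user asked for it at load time. */
	if (num_vfs) {
		int rc = pci_enable_sriov(pdev, num_vfs);

		if (rc)
			dev_warn(&pdev->dev, "failed to enable %u VFs: %d\n",
				 num_vfs, rc);
	}

	return 0;
}

The drawback is the one noted in the quoted discussion: the VF count is
fixed at module load time and cannot be changed at runtime.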

We don't want the lack of an SRIOV module parameter in the bnx2x driver
to be the bottleneck when we submit the SRIOV feature upstream, but we
also don't want to enable SRIOV by default (following the same logic as
the other drivers: most users don't use SRIOV, and it would strain their
resources).

As we see it, there are several possible ways of solving the issue:
 1. Use some network tool (e.g., ethtool).
 2. Implement a standard sysfs interface for PCIe devices, since SRIOV is
    not solely network-related (this would have to go through the Linux
    PCI tree; a sketch of what it might look like follows this list).
 3. Implement a module parameter in our bnx2x code.
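
To make option 2 a bit more concrete: one possible shape is for the PCI
core to own a generic per-device sysfs file and call back into the bound
driver with the requested VF count. The callback below is illustrative
only (the hook and its signature are assumptions, not an existing kernel
interface); the point is that the driver-side work reduces to enabling or
disabling SRIOV for a given count:

/*
 * Illustrative sketch: an assumed per-driver callback the PCI core could
 * invoke when userspace writes a VF count to a generic sysfs file.
 */
#include <linux/pci.h>

static int foo_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
	int rc;

	if (num_vfs == 0) {
		pci_disable_sriov(pdev);
		return 0;
	}

	/* Driver-specific limits and resource checks would go here. */
	rc = pci_enable_sriov(pdev, num_vfs);
	return rc ? rc : num_vfs;
}

The userspace side would then be a plain write to the sysfs file, along
the lines of the 'echo N > /sys/bus/pci/devices/<address>/...' example
quoted at the top of this thread, but implemented once in the PCI core
instead of separately in each driver.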

We would like to know which of these methods you prefer for solving this
issue, and to hear if you have another (better?) way in which we could add
this kind of support.

Thanks,
Yuval Mintz


