Date:	Wed, 25 Mar 2015 14:32:25 +0000
From:	"Tantilov, Emil S" <emil.s.tantilov@...el.com>
To:	Vlad Zolotarov <vladz@...udius-systems.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"davem@...emloft.net" <davem@...emloft.net>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"nhorman@...hat.com" <nhorman@...hat.com>,
	"sassmann@...hat.com" <sassmann@...hat.com>,
	"jogreene@...hat.com" <jogreene@...hat.com>
Subject: RE: [net-next 05/16] ixgbevf: enable multiple queue support

>-----Original Message-----
>From: Vlad Zolotarov [mailto:vladz@...udius-systems.com] 
>Sent: Wednesday, March 25, 2015 1:45 AM
>Subject: Re: [net-next 05/16] ixgbevf: enable multiple queue support
>
>
>On 02/05/15 11:35, Jeff Kirsher wrote:
>> From: Emil Tantilov <emil.s.tantilov@...el.com>
>>
>> This patch enables multiple queues and RSS support for the VF.
>> Maximum of 2 queues are supported due to available vectors.
>>
>> Signed-off-by: Emil Tantilov <emil.s.tantilov@...el.com>
>> Tested-by: Krishneil Singh <Krishneil.k.singh@...el.com>
>> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@...el.com>
>> ---
>>   drivers/net/ethernet/intel/ixgbevf/ixgbevf.h      |  1 +
>>   drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 22 +++++++++++++++++++---
>>   2 files changed, 20 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
>> index 8c44ab2..65cda34 100644
>> --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
>> +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
>> @@ -124,6 +124,7 @@ struct ixgbevf_ring {
>>   
>>   #define MAX_RX_QUEUES IXGBE_VF_MAX_RX_QUEUES
>>   #define MAX_TX_QUEUES IXGBE_VF_MAX_TX_QUEUES
>> +#define IXGBEVF_MAX_RSS_QUEUES 2
>
>According to the MRQE register description in the 82599 and x540 specs,
>2 (the value of IXGBEVF_MAX_RSS_QUEUES) is the minimum allowed number of
>queues in a single pool, and thus the minimum number of VF queues - not
>the maximum (see more comments on this below).

The VF driver can have a maximum of 2 queues, as mentioned in the description, and it will attempt to allocate the maximum number of queues allowed.

All of the virtualization-enabled modes of MRQC.MRQE have at least 2 queues per pool.
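
For illustration, the sizing logic boils down to the following (a minimal
standalone sketch of the hunk further down; ixgbevf_rss_queues() is a
made-up helper name, not something in the driver):

	#include <linux/cpumask.h>	/* num_online_cpus() */
	#include <linux/kernel.h>	/* min_t() */
	#include <linux/types.h>

	#define IXGBEVF_MAX_RSS_QUEUES	2	/* limited by available vectors */

	/* One RSS queue per online CPU, but never more than the 2 queues
	 * the VF has MSI-X vectors for.
	 */
	static u16 ixgbevf_rss_queues(void)
	{
		return min_t(u16, num_online_cpus(), IXGBEVF_MAX_RSS_QUEUES);
	}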

>>   #define IXGBEVF_DEFAULT_TXD   1024
>>   #define IXGBEVF_DEFAULT_RXD   512
>> diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
>> index c9b49bf..815808f 100644
>> --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
>> +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
>> @@ -1794,7 +1794,8 @@ static int ixgbevf_configure_dcb(struct ixgbevf_adapter *adapter)
>>   	struct ixgbe_hw *hw = &adapter->hw;
>>   	unsigned int def_q = 0;
>>   	unsigned int num_tcs = 0;
>> -	unsigned int num_rx_queues = 1;
>> +	unsigned int num_rx_queues = adapter->num_rx_queues;
>> +	unsigned int num_tx_queues = adapter->num_tx_queues;
>>   	int err;
>>   
>>   	spin_lock_bh(&adapter->mbx_lock);
>> @@ -1808,6 +1809,9 @@ static int ixgbevf_configure_dcb(struct ixgbevf_adapter *adapter)
>>   		return err;
>>   
>>   	if (num_tcs > 1) {
>> +		/* we need only one Tx queue */
>> +		num_tx_queues = 1;
>> +
>>   		/* update default Tx ring register index */
>>   		adapter->tx_ring[0]->reg_idx = def_q;
>>   
>> @@ -1816,7 +1820,8 @@ static int ixgbevf_configure_dcb(struct ixgbevf_adapter *adapter)
>>   	}
>>   
>>   	/* if we have a bad config abort request queue reset */
>> -	if (adapter->num_rx_queues != num_rx_queues) {
>> +	if ((adapter->num_rx_queues != num_rx_queues) ||
>> +	    (adapter->num_tx_queues != num_tx_queues)) {
>>   		/* force mailbox timeout to prevent further messages */
>>   		hw->mbx.timeout = 0;
>>   
>> @@ -2181,8 +2186,19 @@ static void ixgbevf_set_num_queues(struct ixgbevf_adapter *adapter)
>>   		return;
>>   
>>   	/* we need as many queues as traffic classes */
>> -	if (num_tcs > 1)
>> +	if (num_tcs > 1) {
>>   		adapter->num_rx_queues = num_tcs;
>> +	} else {
>> +		u16 rss = min_t(u16, num_online_cpus(), IXGBEVF_MAX_RSS_QUEUES);
>
>Emil,
>Here you ignore the IXGBE_VF_RX_QUEUES value returned by the PF (stored
>in hw->mac.max_rx_queues).

Again, the VF driver allocates the maximum number of queues possible; it is not driven by the PF.
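
For contrast, the extra clamp you are describing would look roughly like
this (a sketch only, reusing hw->mac.max_rx_queues from your comment; the
patch deliberately does not apply it):

	u16 rss = min_t(u16, num_online_cpus(), IXGBEVF_MAX_RSS_QUEUES);

	/* Hypothetical: additionally honor the queue count the PF
	 * reported via IXGBE_VF_RX_QUEUES.
	 */
	rss = min_t(u16, rss, adapter->hw.mac.max_rx_queues);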

>This will accidentally work if the PF is driven by a current ixgbe Linux
>driver, but what would happen if the HV is not a Linux box?

Do you know of a driver that has an issue in this configuration?

>What if the HV decides to configure PSRTYPE[x].RQPL to 0 and decides
>that the VF should be configured with a single queue? I may be missing
>something here, but the above line seems bogus. Please comment.

PSRTYPE.RQPL - this field only has meaning when MRQC.MRQE is configured to allow multiple queues, so if I had to guess I would say that if there were such a driver it would be misconfigured. But even then the VF can have 2 queues; RSS just likely won't work.
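
To put that in code (a hypothetical illustration only; the RQPL placement
in bits 30:29 of PSRTYPE[n] is my reading of the 82599 datasheet, so treat
the exact shift as an assumption):

	#include <linux/types.h>

	#define PSRTYPE_RQPL_SHIFT	29

	/* RQPL encodes the RSS queues per pool as a power of two
	 * (0 -> 1 queue, 1 -> 2 queues), and the hardware only consults
	 * the field when MRQC.MRQE selects a multiple-queue mode.
	 */
	static u32 psrtype_with_rqpl(u32 psrtype, unsigned int num_rx_queues)
	{
		if (num_rx_queues > 1)
			psrtype |= 1 << PSRTYPE_RQPL_SHIFT;

		return psrtype;
	}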

Thanks,
Emil


