Date: Tue, 6 Jun 2023 09:23:02 -0600
From: Ahmed Zaki <ahmed.zaki@...el.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>, <davem@...emloft.net>,
	<kuba@...nel.org>, <pabeni@...hat.com>, <edumazet@...gle.com>,
	<netdev@...r.kernel.org>, Rafal Romanowski <rafal.romanowski@...el.com>
Subject: Re: [PATCH net-next 3/3] iavf: remove mask from
 iavf_irq_enable_queues()


On 2023-06-06 04:26, Maciej Fijalkowski wrote:
> On Mon, Jun 05, 2023 at 01:56:48PM -0600, Ahmed Zaki wrote:
>> On 2023-06-05 13:25, Maciej Fijalkowski wrote:
>>> On Fri, Jun 02, 2023 at 10:13:02AM -0700, Tony Nguyen wrote:
>>>> From: Ahmed Zaki <ahmed.zaki@...el.com>
>>>>
>>>> Enable more than 32 IRQs by removing the u32 bit mask in
>>>> iavf_irq_enable_queues(). There is no need for the mask as there are no
>>>> callers that select individual IRQs through the bitmask. Also, if the PF
>>>> allocates more than 32 IRQs, this mask will prevent us from using all of
>>>> them.
>>>>
>>>> The comment in iavf_register.h is modified to show that the maximum
>>>> number allowed for the IRQ index is 63 as per the iAVF standard 1.0 [1].
>>> please use imperative mood:
>>> "modify the comment in..."
>>>
>>> besides, it sounds to me like a bug, we were not following the spec, no?
>> Yes, but all PFs were allocating <= 16 IRQs, so it was not causing any
>> issues.
>>
>>
>>>> link: [1] https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf
>>>> Signed-off-by: Ahmed Zaki <ahmed.zaki@...el.com>
>>>> Tested-by: Rafal Romanowski <rafal.romanowski@...el.com>
>>>> Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
>>> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
>>>
>>>> ---
>>>>    drivers/net/ethernet/intel/iavf/iavf.h          |  2 +-
>>>>    drivers/net/ethernet/intel/iavf/iavf_main.c     | 15 ++++++---------
>>>>    drivers/net/ethernet/intel/iavf/iavf_register.h |  2 +-
>>>>    3 files changed, 8 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
>>>> index 9abaff1f2aff..39d0fe76a38f 100644
>>>> --- a/drivers/net/ethernet/intel/iavf/iavf.h
>>>> +++ b/drivers/net/ethernet/intel/iavf/iavf.h
>>>> @@ -525,7 +525,7 @@ void iavf_set_ethtool_ops(struct net_device *netdev);
>>>>    void iavf_update_stats(struct iavf_adapter *adapter);
>>>>    void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
>>>>    int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
>>>> -void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask);
>>>> +void iavf_irq_enable_queues(struct iavf_adapter *adapter);
>>>>    void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
>>>>    void iavf_free_all_rx_resources(struct iavf_adapter *adapter);
>>>> diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
>>>> index 3a78f86ba4f9..1332633f0ca5 100644
>>>> --- a/drivers/net/ethernet/intel/iavf/iavf_main.c
>>>> +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
>>>> @@ -359,21 +359,18 @@ static void iavf_irq_disable(struct iavf_adapter *adapter)
>>>>    }
>>>>    /**
>>>> - * iavf_irq_enable_queues - Enable interrupt for specified queues
>>>> + * iavf_irq_enable_queues - Enable interrupt for all queues
>>>>     * @adapter: board private structure
>>>> - * @mask: bitmap of queues to enable
>>>>     **/
>>>> -void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask)
>>>> +void iavf_irq_enable_queues(struct iavf_adapter *adapter)
>>>>    {
>>>>    	struct iavf_hw *hw = &adapter->hw;
>>>>    	int i;
>>>>    	for (i = 1; i < adapter->num_msix_vectors; i++) {
>>>> -		if (mask & BIT(i - 1)) {
>>>> -			wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
>>>> -			     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
>>>> -			     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
>>>> -		}
>>>> +		wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
>>>> +		     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
>>>> +		     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
>>>>    	}
>>>>    }
>>>> @@ -387,7 +384,7 @@ void iavf_irq_enable(struct iavf_adapter *adapter, bool flush)
>>>>    	struct iavf_hw *hw = &adapter->hw;
>>>>    	iavf_misc_irq_enable(adapter);
>>>> -	iavf_irq_enable_queues(adapter, ~0);
>>>> +	iavf_irq_enable_queues(adapter);
>>>>    	if (flush)
>>>>    		iavf_flush(hw);
>>>> diff --git a/drivers/net/ethernet/intel/iavf/iavf_register.h b/drivers/net/ethernet/intel/iavf/iavf_register.h
>>>> index bf793332fc9d..a19e88898a0b 100644
>>>> --- a/drivers/net/ethernet/intel/iavf/iavf_register.h
>>>> +++ b/drivers/net/ethernet/intel/iavf/iavf_register.h
>>>> @@ -40,7 +40,7 @@
>>>>    #define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT)
>>>>    #define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
>>>>    #define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
>>>> -#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
>>> so this was wrong even before as not indicating 31 as max?
>> Correct, but again no issues.
>>
>> Given that, should I re-send to net ?
> probably with older kernels PFs would still be allocating <= 16 irqs,
> right? not sure if one could take a PF and hack it to request for more
> than 32 irqs and then hit the wall with the mask you're removing.
>
Unlikely, since the VF currently never requests more than 16 queues, so
any IRQs beyond 16 go unused.

The "fix" is needed for another patch that will enable up to 256 queues 
though.

