Message-ID: <aQMxvzYqJkwNBYf0@mev-dev.igk.intel.com>
Date: Thu, 30 Oct 2025 10:37:03 +0100
From: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
To: Paul Menzel <pmenzel@...gen.mpg.de>
Cc: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
	intel-wired-lan@...ts.osuosl.org, netdev@...r.kernel.org,
	aleksander.lobakin@...el.com, przemyslaw.kitszel@...el.com,
	jacob.e.keller@...el.com,
	Aleksandr Loktionov <aleksandr.loktionov@...el.com>
Subject: Re: [PATCH iwl-next v3] ice: use netif_get_num_default_rss_queues()

On Thu, Oct 30, 2025 at 10:10:32AM +0100, Paul Menzel wrote:
> Dear Michal,
> 
> 
> Thank you for your patch. For the summary, I’d add:
> 
> ice: Use netif_get_num_default_rss_queues() to decrease queue number
> 
> Am 30.10.25 um 09:30 schrieb Michal Swiatkowski:
> > On some high-core systems (like AMD EPYC Bergamo, Intel Clearwater
> > Forest) loading ice driver with default values can lead to queue/irq
> > exhaustion. It will result in no additional resources for SR-IOV.
> 
> Could you please elaborate how to make the queue/irq exhaustion visible?
> 

What do you mean? On a high-core system, let's say num_online_cpus()
returns 288; on an 8-port card we have only 256 IRQs per PF (2k in
total). The driver will load with 256 queues (and IRQs) on each PF.
Any VF creation command will then fail because no free IRQs are left:
(echo X > /sys/class/net/ethX/device/sriov_numvfs)
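
To make the exhaustion concrete, below is a rough userspace-only
sketch of the arithmetic. The 2048-vector device pool and the 256
per-PF MSI-X cap are assumptions picked to mirror the numbers above,
not values read from the driver, and the "new default" of half the
online CPUs follows the commit message; the exact value returned by
netif_get_num_default_rss_queues() depends on the CPU topology. The
real driver additionally reserves OICR, flow director and RDMA
vectors (see the diff below).

/*
 * Illustrative arithmetic only -- not driver code. All limits here
 * are assumptions chosen to mirror the numbers quoted in this thread
 * (288 CPUs, 8 ports, "2k in total").
 */
#include <stdio.h>

int main(void)
{
	const unsigned int online_cpus = 288;   /* num_online_cpus() on the test box */
	const unsigned int ports = 8;           /* PFs on the card */
	const unsigned int per_pf_cap = 256;    /* assumed per-PF MSI-X cap */
	const unsigned int device_pool = 2048;  /* assumed device-wide MSI-X pool */

	/* Old default: one queue/IRQ per online CPU, clamped by the PF cap. */
	unsigned int old_per_pf = online_cpus < per_pf_cap ? online_cpus : per_pf_cap;
	/* New default: roughly half the online CPUs, per the commit message. */
	unsigned int new_per_pf = online_cpus / 2;

	unsigned int old_total = old_per_pf * ports;
	unsigned int new_total = new_per_pf * ports;

	printf("old default: %u vectors used, %d left for VFs\n",
	       old_total, (int)device_pool - (int)old_total);
	printf("new default: %u vectors used, %d left for VFs\n",
	       new_total, (int)device_pool - (int)new_total);
	return 0;
}

With the old default the whole pool is consumed before the first VF
is created, which is why the sriov_numvfs write above fails.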

> > In most cases there is no performance reason for more than half of
> > num_cpus() queues. Limit the default value to that using the generic
> > netif_get_num_default_rss_queues().
> > 
> > Still, using ethtool the number of queues can be changed up to
> > num_online_cpus(). It can be done by calling:
> > $ethtool -L ethX combined $(nproc)
> > 
> > This change affects only the default queue amount.
> 
> How would you judge the regression potential, that means for people where
> the defaults work good enough, and the queue number is reduced now?
>

You can take a look at the commit that introduced the /2 change in
netif_get_num_default_rss_queues() [1]. There is a good justification
for it there. In short, going above the physical core count is just a
waste of CPU resources.

[1] https://lore.kernel.org/netdev/20220315091832.13873-1-ihuguet@redhat.com/
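
For context, the reasoning in [1] is that SMT siblings do not add RSS
throughput, so the default should be derived from the physical core
count rather than from num_online_cpus(). Below is a userspace-only
sketch of that counting, assuming the usual sysfs topology files
(physical_package_id/core_id); it only illustrates the policy, it is
not the in-kernel implementation, and the exact value
netif_get_num_default_rss_queues() returns should be checked against
the kernel source.

/*
 * Userspace sketch of "count physical cores, ignoring SMT siblings".
 * The sysfs paths and the simple pair matching below are assumptions
 * about a typical Linux topology layout, not kernel code.
 */
#include <stdio.h>
#include <unistd.h>

#define MAX_CPUS 4096

static int read_topology_id(long cpu, const char *file)
{
	char path[128];
	FILE *f;
	int id = -1;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%ld/topology/%s", cpu, file);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &id) != 1)
		id = -1;
	fclose(f);
	return id;
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	int pkg[MAX_CPUS], core[MAX_CPUS];
	int seen = 0;

	for (long cpu = 0; cpu < ncpus && seen < MAX_CPUS; cpu++) {
		int p = read_topology_id(cpu, "physical_package_id");
		int c = read_topology_id(cpu, "core_id");
		int dup = 0;

		if (p < 0 || c < 0)
			continue;
		for (int i = 0; i < seen; i++) {
			if (pkg[i] == p && core[i] == c) {
				dup = 1;	/* SMT sibling of a counted core */
				break;
			}
		}
		if (!dup) {
			pkg[seen] = p;
			core[seen] = c;
			seen++;
		}
	}

	printf("online logical CPUs: %ld, physical cores (SMT excluded): %d\n",
	       ncpus, seen);
	return 0;
}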

> 
> Kind regards,
> 
> Paul
> 
> 
> > Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>
> > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
> > ---
> > v2 --> v3:
> >   * use $(nproc) in command example in commit message
> > 
> > v1 --> v2:
> >   * Follow Olek's comment and switch from custom limiting to the generic
> >     netif_...() function.
> >   * Add more info in commit message (Paul)
> >   * Dropping RB tags, as it is different patch now
> > ---
> >   drivers/net/ethernet/intel/ice/ice_irq.c |  5 +++--
> >   drivers/net/ethernet/intel/ice/ice_lib.c | 12 ++++++++----
> >   2 files changed, 11 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
> > index 30801fd375f0..1d9b2d646474 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_irq.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_irq.c
> > @@ -106,9 +106,10 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf,
> >   #define ICE_RDMA_AEQ_MSIX 1
> >   static int ice_get_default_msix_amount(struct ice_pf *pf)
> >   {
> > -	return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
> > +	return ICE_MIN_LAN_OICR_MSIX + netif_get_num_default_rss_queues() +
> >   	       (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
> > -	       (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_AEQ_MSIX : 0);
> > +	       (ice_is_rdma_ena(pf) ? netif_get_num_default_rss_queues() +
> > +				      ICE_RDMA_AEQ_MSIX : 0);
> >   }
> >   /**
> > diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> > index bac481e8140d..e366d089bef9 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> > @@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
> >   static u16 ice_get_rxq_count(struct ice_pf *pf)
> >   {
> > -	return min(ice_get_avail_rxq_count(pf), num_online_cpus());
> > +	return min(ice_get_avail_rxq_count(pf),
> > +		   netif_get_num_default_rss_queues());
> >   }
> >   static u16 ice_get_txq_count(struct ice_pf *pf)
> >   {
> > -	return min(ice_get_avail_txq_count(pf), num_online_cpus());
> > +	return min(ice_get_avail_txq_count(pf),
> > +		   netif_get_num_default_rss_queues());
> >   }
> >   /**
> > @@ -907,13 +909,15 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi)
> >   		if (vsi->type == ICE_VSI_CHNL)
> >   			vsi->rss_size = min_t(u16, vsi->num_rxq, max_rss_size);
> >   		else
> > -			vsi->rss_size = min_t(u16, num_online_cpus(),
> > +			vsi->rss_size = min_t(u16,
> > +					      netif_get_num_default_rss_queues(),
> >   					      max_rss_size);
> >   		vsi->rss_lut_type = ICE_LUT_PF;
> >   		break;
> >   	case ICE_VSI_SF:
> >   		vsi->rss_table_size = ICE_LUT_VSI_SIZE;
> > -		vsi->rss_size = min_t(u16, num_online_cpus(), max_rss_size);
> > +		vsi->rss_size = min_t(u16, netif_get_num_default_rss_queues(),
> > +				      max_rss_size);
> >   		vsi->rss_lut_type = ICE_LUT_VSI;
> >   		break;
> >   	case ICE_VSI_VF:
> 
