Message-ID: <IA3PR11MB898627B7BCB9ACEEE31A377BE5FDA@IA3PR11MB8986.namprd11.prod.outlook.com>
Date: Tue, 28 Oct 2025 07:48:11 +0000
From: "Loktionov, Aleksandr" <aleksandr.loktionov@...el.com>
To: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "pmenzel@...gen.mpg.de"
<pmenzel@...gen.mpg.de>, "Lobakin, Aleksander"
<aleksander.lobakin@...el.com>, "Kitszel, Przemyslaw"
<przemyslaw.kitszel@...el.com>, "Keller, Jacob E" <jacob.e.keller@...el.com>
Subject: RE: [Intel-wired-lan] [PATCH iwl-next v2] ice: use
netif_get_num_default_rss_queues()
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@...osl.org> On Behalf
> Of Michal Swiatkowski
> Sent: Tuesday, October 28, 2025 8:07 AM
> To: intel-wired-lan@...ts.osuosl.org
> Cc: netdev@...r.kernel.org; pmenzel@...gen.mpg.de; Lobakin, Aleksander
> <aleksander.lobakin@...el.com>; Kitszel, Przemyslaw
> <przemyslaw.kitszel@...el.com>; Keller, Jacob E
> <jacob.e.keller@...el.com>; Michal Swiatkowski
> <michal.swiatkowski@...ux.intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-next v2] ice: use
> netif_get_num_default_rss_queues()
>
> On some high-core systems (like AMD EPYC Bergamo or Intel Clearwater
> Forest), loading the ice driver with default values can lead to
> queue/IRQ exhaustion, leaving no additional resources for SR-IOV.
>
> In most cases there is no performance benefit in using more than half
> of num_online_cpus() queues. Limit the default to that by using the
> generic netif_get_num_default_rss_queues().
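For context: netif_get_num_default_rss_queues() counts one queue per
physical core, skipping SMT siblings, so on a 2-way SMT machine it is
indeed ~half of num_online_cpus(). Roughly, paraphrasing net/core/dev.c
from memory (simplified sketch, not the verbatim upstream source):

	int netif_get_num_default_rss_queues(void)
	{
		cpumask_var_t cpus;
		int cpu, count = 0;

		if (unlikely(is_kdump_kernel() ||
			     !zalloc_cpumask_var(&cpus, GFP_KERNEL)))
			return 1;

		cpumask_copy(cpus, cpu_online_mask);
		for_each_cpu(cpu, cpus) {
			/* count this core once, then drop its SMT siblings */
			++count;
			cpumask_andnot(cpus, cpus,
				       topology_sibling_cpumask(cpu));
		}
		free_cpumask_var(cpus);

		return count;
	}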
>
> Still, the number of queues can be changed with ethtool, up to
> num_online_cpus(). It can be done by calling:
> $ ethtool -L ethX combined max_cpu
>
It could be nicer to suggest $(nproc) here:
$ ethtool -L ethX combined $(nproc)
> This change affects only the default queue count.
>
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
> ---
> v1 --> v2:
> * Follow Olek's comment and switch from custom limiting to the
>   generic netif_...() function.
> * Add more info in the commit message (Paul)
> * Drop RB tags, as it is a different patch now
> ---
> drivers/net/ethernet/intel/ice/ice_irq.c | 5 +++--
> drivers/net/ethernet/intel/ice/ice_lib.c | 12 ++++++++----
> 2 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
> index 30801fd375f0..1d9b2d646474 100644
> --- a/drivers/net/ethernet/intel/ice/ice_irq.c
> +++ b/drivers/net/ethernet/intel/ice/ice_irq.c
> @@ -106,9 +106,10 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf,
>  #define ICE_RDMA_AEQ_MSIX 1
>  static int ice_get_default_msix_amount(struct ice_pf *pf)
>  {
> -	return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
> +	return ICE_MIN_LAN_OICR_MSIX + netif_get_num_default_rss_queues() +
>  	       (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
> -	       (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_AEQ_MSIX : 0);
> +	       (ice_is_rdma_ena(pf) ? netif_get_num_default_rss_queues() +
> +				      ICE_RDMA_AEQ_MSIX : 0);
>  }
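Side note, to make the saving concrete: on a hypothetical 128-core,
256-thread host with RDMA enabled, the two per-CPU terms would drop
from 256 + 256 to 128 + 128, i.e. 256 MSI-X vectors freed for SR-IOV.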
>
> /**
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> index bac481e8140d..e366d089bef9 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
> 
>  static u16 ice_get_rxq_count(struct ice_pf *pf)
>  {
> -	return min(ice_get_avail_rxq_count(pf), num_online_cpus());
> +	return min(ice_get_avail_rxq_count(pf),
> +		   netif_get_num_default_rss_queues());
>  }
min(a, b) resolves to the common type of its arguments, which here will
be int, since netif_get_num_default_rss_queues() returns int. The result
is then implicitly truncated to u16 on return.
What do you think about making this explicit with min_t() to avoid type
surprises?
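Untested sketch of what I mean:

	return min_t(u16, ice_get_avail_rxq_count(pf),
		     netif_get_num_default_rss_queues());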
>
>  static u16 ice_get_txq_count(struct ice_pf *pf)
>  {
> -	return min(ice_get_avail_txq_count(pf), num_online_cpus());
> +	return min(ice_get_avail_txq_count(pf),
> +		   netif_get_num_default_rss_queues());
>  }
Same min_t() treatment here?
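I.e., the same untested idea:

	return min_t(u16, ice_get_avail_txq_count(pf),
		     netif_get_num_default_rss_queues());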
Otherwise, fine for me.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>