Message-ID: <5eca295e-1675-4779-b0d6-ec8a7550516f@intel.com>
Date: Wed, 13 Nov 2024 17:21:20 +0100
From: Lukasz Czapnik <lukasz.czapnik@...el.com>
To: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
<intel-wired-lan@...ts.osuosl.org>
CC: <pmenzel@...gen.mpg.de>, <wojciech.drewek@...el.com>,
<marcin.szycik@...el.com>, <netdev@...r.kernel.org>,
<konrad.knitter@...el.com>, <pawel.chmielewski@...el.com>,
<horms@...nel.org>, <David.Laight@...LAB.COM>,
<nex.sw.ncis.nat.hpm.dev@...el.com>, <pio.raczynski@...il.com>,
<sridhar.samudrala@...el.com>, <jacob.e.keller@...el.com>,
<jiri@...nulli.us>, <przemyslaw.kitszel@...el.com>
Subject: Re: [Intel-wired-lan] [iwl-next v7 5/9] ice, irdma: move interrupts
code to irdma
On 11/4/2024 1:13 PM, Michal Swiatkowski wrote:
> Move responsibility for requesting MSI-X vectors for the RDMA feature
> from the ice driver to the irdma driver. This allows a simple fallback
> when there are not enough MSI-X vectors available.
>
> Change the number of MSI-X vectors used for control from 4 to 1, as
> more than one isn't needed for this purpose.
>
> Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
> ---
> drivers/infiniband/hw/irdma/hw.c | 2 -
> drivers/infiniband/hw/irdma/main.c | 46 ++++++++++++++++-
> drivers/infiniband/hw/irdma/main.h | 3 ++
> drivers/net/ethernet/intel/ice/ice.h | 1 -
> drivers/net/ethernet/intel/ice/ice_idc.c | 64 ++++++------------------
> drivers/net/ethernet/intel/ice/ice_irq.c | 3 +-
> include/linux/net/intel/iidc.h | 2 +
> 7 files changed, 65 insertions(+), 56 deletions(-)
>
> diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
> index ad50b77282f8..69ce1862eabe 100644
> --- a/drivers/infiniband/hw/irdma/hw.c
> +++ b/drivers/infiniband/hw/irdma/hw.c
> @@ -498,8 +498,6 @@ static int irdma_save_msix_info(struct irdma_pci_f *rf)
> iw_qvlist->num_vectors = rf->msix_count;
> if (rf->msix_count <= num_online_cpus())
> rf->msix_shared = true;
> - else if (rf->msix_count > num_online_cpus() + 1)
> - rf->msix_count = num_online_cpus() + 1;
>
> pmsix = rf->msix_entries;
> for (i = 0, ceq_idx = 0; i < rf->msix_count; i++, iw_qvinfo++) {
> diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c
> index 3f13200ff71b..1ee8969595d3 100644
> --- a/drivers/infiniband/hw/irdma/main.c
> +++ b/drivers/infiniband/hw/irdma/main.c
> @@ -206,6 +206,43 @@ static void irdma_lan_unregister_qset(struct irdma_sc_vsi *vsi,
> ibdev_dbg(&iwdev->ibdev, "WS: LAN free_res for rdma qset failed.\n");
> }
>
> +static int irdma_init_interrupts(struct irdma_pci_f *rf, struct ice_pf *pf)
> +{
> + int i;
> +
> + rf->msix_count = num_online_cpus() + IRDMA_NUM_AEQ_MSIX;
I think we can default RDMA MSI-X to 64 instead of num_online_cpus(). It
would play better on platforms with high core counts (200+ cores), and
there is very little benefit to having more than 64 queues.

In those special cases where more queues are needed, the user should be
able to manually assign more resources to RDMA.
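
Something along these lines would do it (just a rough sketch, untested;
IRDMA_MAX_MSIX_DEFAULT would be a new, hypothetical define set to 64):

	/* Cap the default RDMA MSI-X request; high core-count systems
	 * gain little from more than 64 queue vectors.
	 */
	rf->msix_count = min_t(u32, num_online_cpus(),
			       IRDMA_MAX_MSIX_DEFAULT) + IRDMA_NUM_AEQ_MSIX;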
Regards,
Lukasz