Message-ID: <ZzWTMwo7hx8qRLnt@mev-dev.igk.intel.com>
Date: Thu, 14 Nov 2024 07:05:39 +0100
From: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
To: Lukasz Czapnik <lukasz.czapnik@...el.com>
Cc: intel-wired-lan@...ts.osuosl.org, pmenzel@...gen.mpg.de,
	wojciech.drewek@...el.com, marcin.szycik@...el.com,
	netdev@...r.kernel.org, konrad.knitter@...el.com,
	pawel.chmielewski@...el.com, horms@...nel.org,
	David.Laight@...lab.com, nex.sw.ncis.nat.hpm.dev@...el.com,
	pio.raczynski@...il.com, sridhar.samudrala@...el.com,
	jacob.e.keller@...el.com, jiri@...nulli.us,
	przemyslaw.kitszel@...el.com
Subject: Re: [Intel-wired-lan] [iwl-next v7 5/9] ice, irdma: move interrupts
 code to irdma

On Wed, Nov 13, 2024 at 05:21:20PM +0100, Lukasz Czapnik wrote:
> 
> 
> On 11/4/2024 1:13 PM, Michal Swiatkowski wrote:
> > Move the responsibility of requesting MSI-X for the RDMA feature from
> > the ice driver to the irdma driver. This allows a simple fallback when
> > there are not enough MSI-X vectors available.
> > 
> > Change the number of MSI-X vectors used for control from 4 to 1, as
> > more than one isn't needed for this purpose.
> > 
> > Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
> > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
> > ---
> >   drivers/infiniband/hw/irdma/hw.c         |  2 -
> >   drivers/infiniband/hw/irdma/main.c       | 46 ++++++++++++++++-
> >   drivers/infiniband/hw/irdma/main.h       |  3 ++
> >   drivers/net/ethernet/intel/ice/ice.h     |  1 -
> >   drivers/net/ethernet/intel/ice/ice_idc.c | 64 ++++++------------------
> >   drivers/net/ethernet/intel/ice/ice_irq.c |  3 +-
> >   include/linux/net/intel/iidc.h           |  2 +
> >   7 files changed, 65 insertions(+), 56 deletions(-)
> > 
> > diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
> > index ad50b77282f8..69ce1862eabe 100644
> > --- a/drivers/infiniband/hw/irdma/hw.c
> > +++ b/drivers/infiniband/hw/irdma/hw.c
> > @@ -498,8 +498,6 @@ static int irdma_save_msix_info(struct irdma_pci_f *rf)
> >   	iw_qvlist->num_vectors = rf->msix_count;
> >   	if (rf->msix_count <= num_online_cpus())
> >   		rf->msix_shared = true;
> > -	else if (rf->msix_count > num_online_cpus() + 1)
> > -		rf->msix_count = num_online_cpus() + 1;
> >   	pmsix = rf->msix_entries;
> >   	for (i = 0, ceq_idx = 0; i < rf->msix_count; i++, iw_qvinfo++) {
> > diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c
> > index 3f13200ff71b..1ee8969595d3 100644
> > --- a/drivers/infiniband/hw/irdma/main.c
> > +++ b/drivers/infiniband/hw/irdma/main.c
> > @@ -206,6 +206,43 @@ static void irdma_lan_unregister_qset(struct irdma_sc_vsi *vsi,
> >   		ibdev_dbg(&iwdev->ibdev, "WS: LAN free_res for rdma qset failed.\n");
> >   }
> > +static int irdma_init_interrupts(struct irdma_pci_f *rf, struct ice_pf *pf)
> > +{
> > +	int i;
> > +
> > +	rf->msix_count = num_online_cpus() + IRDMA_NUM_AEQ_MSIX;
> 
> I think we can default RDMA MSI-X to 64 instead of num_online_cpus(). It
> would play better on platforms with a high core count (200+ cores). There
> is very little benefit to having more than 64 queues.
> 

Sure, I can do that. Do we have some numbers to put into the commit
message?

> In those special cases, when more queues are needed, the user should be
> able to manually assign more resources to RDMA.

Do we have a way to do that? I mean, currently AFAIK this is the only
place where RDMA requests MSI-X from ice. The driver can be reloaded to
do it again (if it didn't receive enough MSI-X and the user changed
other config to free some for the RDMA use case), but the max value is
fixed here (to num_online_cpus() now, and to 64 after your suggestion).
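
For reference, the generic kernel pattern for that kind of fallback is
to request a range and take whatever the core grants. A minimal sketch
(not the actual ice<->irdma interface from this series;
irdma_probe_vectors() is a made-up name for illustration):

#include <linux/pci.h>

/* Request up to 'ideal' MSI-X vectors, accepting anything down to
 * 'minimum'. pci_alloc_irq_vectors() returns the number of vectors
 * actually allocated, or a negative errno.
 */
static int irdma_probe_vectors(struct pci_dev *pdev, int ideal, int minimum)
{
	int nvecs = pci_alloc_irq_vectors(pdev, minimum, ideal,
					  PCI_IRQ_MSIX);

	if (nvecs < 0)
		return nvecs;

	/* Fewer than 'ideal' is fine: the caller sizes its CEQs/queues
	 * to whatever was granted.
	 */
	return nvecs;
}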

The RDMA driver should be able to reinit MSI-X at runtime, exactly the
same way eth changes its MSI-X amount when the queue count changes.
This should be done in the irdma driver. Hope someone will take care of
that (if it is really needed, because if 64 is always enough we are
fine).

In summary, I will add:

#define IRDMA_NUM_OPTIMAL_MSIX 64

rf->msix_count = min(IRDMA_NUM_OPTIMAL_MSIX, num_online_cpus()) +
		 IRDMA_NUM_AEQ_MSIX;
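
In context, the request in irdma_init_interrupts() from the patch above
would then look roughly like this (a sketch; the exact placement may
differ in the final version):

static int irdma_init_interrupts(struct irdma_pci_f *rf, struct ice_pf *pf)
{
	int i;

	/* Cap the per-CPU part at IRDMA_NUM_OPTIMAL_MSIX (64, defined
	 * above) so high core count platforms don't request MSI-X they
	 * won't benefit from; the AEQ vector(s) come on top of that.
	 */
	rf->msix_count = min(IRDMA_NUM_OPTIMAL_MSIX, num_online_cpus()) +
			 IRDMA_NUM_AEQ_MSIX;

	/* ... rest of the function as in the patch ... */
}

On a 256-core machine, for example, this caps the request at 64 +
IRDMA_NUM_AEQ_MSIX vectors instead of 256 + IRDMA_NUM_AEQ_MSIX.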

Thanks,
Michal

> 
> Regards,
> Lukasz
> 
