Message-ID: <BYAPR21MB1270B8B3B70465B7B355EF86BFC59@BYAPR21MB1270.namprd21.prod.outlook.com>
Date:   Tue, 24 Aug 2021 01:38:44 +0000
From:   Dexuan Cui <decui@...rosoft.com>
To:     Haiyang Zhang <haiyangz@...rosoft.com>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:     KY Srinivasan <kys@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        Paul Rosswurm <paulros@...rosoft.com>,
        Shachar Raindel <shacharr@...rosoft.com>,
        "olaf@...fle.de" <olaf@...fle.de>, vkuznets <vkuznets@...hat.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH net-next] mana: Add support for EQ sharing

> From: Haiyang Zhang <haiyangz@...rosoft.com>
> Sent: Saturday, August 21, 2021 2:18 PM
> To: Dexuan Cui <decui@...rosoft.com>; linux-hyperv@...r.kernel.org;
> >
> > 4) support up to 64 queues per net interface (it was 16). It looks like
> > the default number of queues is also 64 if the VM has >=64 CPUs? --
> > should we add a new field apc->default_queues and limit it to 16 or 32?
> > We'd like to make sure the best performance can typically be achieved
> > with the default number of queues.
> I found that on a 40-CPU VM, mana_query_vport_cfg() returns max_txq:32 and
> max_rxq:32, so I didn't further reduce the number (32) reported by the PF.
> 
> That's also the opinion of the host team -- if they upgrade the NIC
> HW in the future, they can adjust the setting on the PF side without
> requiring a VF driver change.

Ah, I forgot this. Thanks for the explanation!  
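
For reference, a minimal sketch of how the VF side can derive its queue
count from the PF-reported limits. min_t() and num_online_cpus() are
standard kernel helpers, but the exact wiring below is an illustrative
assumption rather than the actual mana code:

	/* Illustrative sketch only. The PF reports the per-vPort queue
	 * limits (e.g. max_txq = max_rxq = 32 on the 40-CPU VM above);
	 * take the smaller of the two, then cap it by the number of
	 * online CPUs so we never create more queues than CPUs.
	 */
	u32 max_queues = min_t(u32, max_txq, max_rxq);

	apc->max_queues = max_queues;
	apc->num_queues = min_t(u32, max_queues, num_online_cpus());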

> > 5) If the VM has >=64 CPUs, with this patch we create 1 HWC EQ and 64 NIC
> > EQs, and IMO the creation of the last NIC EQ would fail, since the host PF
> > driver currently allows only 64 MSI-X interrupts? If that's the case, I
> > think mana_probe() -> mana_create_eq() fails and no net interface will be
> > created. It looks like we should create at most 63 NIC EQs in this case,
> > and make sure we don't create too many SQs/RQs accordingly.
> >
> > At the end of mana_gd_query_max_resources(), should we add something
> > like:
> > 	if (gc->max_num_queues >= gc->num_msix_usable - 1)
> > 		gc->max_num_queues = gc->num_msix_usable - 1;
> As mentioned, the PF currently allows 32 queues and 64 MSI-X interrupts.
> The PF should raise the MSI-X limit if the number of queues is increased
> to 64 or more.

Makes sense. My concern above was a false alarm.
 
> But for robustness, I like your idea of adding a check like the above in
> the VF.

Thanks!
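
For completeness, the check could also be written with the kernel's
min_t() helper, e.g. at the end of mana_gd_query_max_resources(). This is
a sketch; the field types in struct gdma_context are an assumption here:

	/* Cap the NIC queue count so that one MSI-X vector always
	 * remains reserved for the HWC EQ (hence the "- 1").
	 */
	gc->max_num_queues = min_t(unsigned int, gc->max_num_queues,
				   gc->num_msix_usable - 1);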
