Date:   Mon, 7 Feb 2022 16:03:01 +0100
From:   Íñigo Huguet <ihuguet@...hat.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     Edward Cree <ecree.xilinx@...il.com>, habetsm.xilinx@...il.com,
        "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next 1/2] sfc: default config to 1 channel/core in
 local NUMA node only

On Fri, Jan 28, 2022 at 11:27 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Fri, 28 Jan 2022 16:19:21 +0100 Íñigo Huguet wrote:
> > Handling channels from CPUs in a different NUMA node can penalize
> > performance, so it is better to configure only one channel per core in
> > the same NUMA node as the NIC, rather than one per core in the whole
> > system.
> >
> > Fall back to all online CPUs if there are no online CPUs in the local
> > NUMA node.
>
> I think we should make netif_get_num_default_rss_queues() do a similar
> thing. Instead of min(8, num_online_cpus()) we should default to
> num_cores / 2 (that's physical cores, not threads). From what I've seen
> this appears to strike a good balance between wasting resources on
> pointless queues per hyperthread, and scaling up for CPUs which have
> many wimpy cores.
>
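(For reference, the quoted change boils down to something like the sketch
below. It is simplified and untested, the helper name is made up, and it
counts online CPUs rather than physical cores, so it is not the exact
driver code.)

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/topology.h>

/* Hypothetical helper: one channel per online CPU in the NIC's NUMA node,
 * falling back to all online CPUs when the local node has none online.
 */
static unsigned int example_wanted_channels(struct device *dev)
{
	int node = dev_to_node(dev);
	unsigned int cpu, count = 0;

	/* Count online CPUs in the same NUMA node as the device, if known */
	if (node != NUMA_NO_NODE)
		for_each_cpu_and(cpu, cpu_online_mask, cpumask_of_node(node))
			count++;

	/* No online CPU in the local node: use every online CPU instead */
	if (!count)
		count = num_online_cpus();

	return count;
}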

I have a few busy weeks coming, but I can do this after that.

With num_cores / 2, are you dividing by 2 because you're assuming 2 NUMA
nodes, or is 2 just a plain constant?
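
Just to check my understanding, I read the suggestion as something along
these lines (untested sketch, not a patch; it assumes the usual
cpumask/topology helpers and collapses SMT siblings to count physical
cores):

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/minmax.h>
#include <linux/topology.h>

/* Sketch: default RSS queues = physical cores / 2 instead of
 * min(8, num_online_cpus()).  Only one CPU per core is counted.
 */
int netif_get_num_default_rss_queues(void)
{
	cpumask_var_t cpus;
	unsigned int cpu, cores = 0;

	if (!zalloc_cpumask_var(&cpus, GFP_KERNEL))
		return 1;

	cpumask_copy(cpus, cpu_online_mask);
	for_each_cpu(cpu, cpus) {
		cores++;
		/* Drop this core's SMT siblings so they aren't counted */
		cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
	}
	free_cpumask_var(cpus);

	/* num_cores / 2, but never fewer than one queue */
	return max(1U, cores / 2);
}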


-- 
Íñigo Huguet
