Message-ID: <20220219135359.ljmfnjo7xqn2h6ze@gmail.com>
Date:   Sat, 19 Feb 2022 13:53:59 +0000
From:   Martin Habets <habetsm.xilinx@...il.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     Íñigo Huguet <ihuguet@...hat.com>,
        Edward Cree <ecree.xilinx@...il.com>,
        "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next 1/2] sfc: default config to 1 channel/core in
 local NUMA node only

On Fri, Feb 11, 2022 at 11:01:00AM -0800, Jakub Kicinski wrote:
> On Fri, 11 Feb 2022 12:05:19 +0100 Íñigo Huguet wrote:
> > Totally. My comment was intended to be more like a question to see why
> > we should or shouldn't consider NUMA nodes in
> > netif_get_num_default_rss_queues. But now I understand your point
> > better.
> > 
> > However, would something like this make sense for
> > netif_get_num_default_rss_queues, or would it be a bit overkill?
> > If the system has more than one NUMA node, allocate one queue per
> > physical core in the local NUMA node.
> > Else, allocate physical cores / 2.
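
For illustration, the proposal could look roughly like this. This is a
sketch only, not an actual patch: it reuses existing kernel helpers
(cpumask_of_node(), topology_sibling_cpumask(), num_online_nodes()) and
takes "local" to mean the calling CPU's node via numa_node_id(), since
this function has no device argument to get the NIC's node from.

	/* Sketch: one queue per physical core in the local NUMA node
	 * when there are multiple nodes, else half the physical cores.
	 */
	int netif_get_num_default_rss_queues(void)
	{
		cpumask_var_t cpus;
		int cpu, count = 0;

		if (!zalloc_cpumask_var(&cpus, GFP_KERNEL))
			return 1;

		if (num_online_nodes() > 1)
			/* Consider only CPUs in the local node
			 * (assumption: caller's node stands in for
			 * the NIC's node).
			 */
			cpumask_and(cpus, cpu_online_mask,
				    cpumask_of_node(numa_node_id()));
		else
			cpumask_copy(cpus, cpu_online_mask);

		/* Strip SMT siblings so only physical cores count. */
		for_each_cpu(cpu, cpus) {
			++count;
			cpumask_andnot(cpus, cpus,
				       topology_sibling_cpumask(cpu));
		}
		free_cpumask_var(cpus);

		if (num_online_nodes() > 1)
			return max(count, 1);
		return max(count / 2, 1);
	}
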
> 
> I don't have a strong opinion on the NUMA question, to be honest.
> It gets complicated pretty quickly. If there is one NIC we may or 
> may not want to divide - for pure packet forwarding, sure, it's best
> done on the node with the NIC, but that assumes the other node 
> is idle or doing something else. How would it not need networking?
> 
> If each node has a separate NIC we should definitely divide. But
> it's impossible to know the NIC count at the netdev level...
> 
> So my thinking was let's leave NUMA configurations to manual tuning.
> If we don't do anything special for NUMA it's less likely someone will
> tell us we did the wrong thing there :) But feel free to implement what
> you suggested above.
> 
> One thing I'm not sure of is whether anyone uses the early AMD chiplet
> CPUs in a NUMA-per-chiplet mode; IIRC they had a mode like that. That
> could potentially be problematic if we wanted to divide by the number
> of nodes. Maybe not as much if just dividing by 2.
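
To make the division question concrete, with made-up numbers: take a
hypothetical 32-core chiplet CPU exposed as 4 NUMA nodes, i.e. 8
physical cores per node:

	divide by number of nodes: 32 / 4 =  8 queues
	divide by 2:               32 / 2 = 16 queues
	per local-node core:                 8 queues

So dividing by the node count cuts the queue count much harder than the
plain cores / 2 rule on such a system.
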

As of a week ago, Xilinx is part of AMD. In time I'm sure we'll be able
to investigate AMD specifics.

Martin

> > Another thing: this patch series appears in patchwork with state
> > "Changes Requested", but no changes have actually been requested. Can
> > the state be changed so the series has more visibility for review?
> 
> I think a resend would be best.
