Date:	Wed, 7 Oct 2015 10:47:17 -0700
From:	Jarno Rajahalme <jrajahalme@...ira.com>
To:	Jesse Gross <jesse@...ira.com>
Cc:	Alexander Duyck <alexander.duyck@...il.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
	"dev@...nvswitch.org" <dev@...nvswitch.org>,
	Pravin Shelar <pshelar@...ira.com>,
	"David S. Miller" <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mm@...ck.org
Subject: Re: [ovs-dev] [PATCH] ovs: do not allocate memory from offline numa node


> On Oct 6, 2015, at 6:01 PM, Jesse Gross <jesse@...ira.com> wrote:
> 
> On Mon, Oct 5, 2015 at 1:25 PM, Alexander Duyck
> <alexander.duyck@...il.com> wrote:
>> On 10/05/2015 06:59 AM, Vlastimil Babka wrote:
>>> 
>>> On 10/02/2015 12:18 PM, Konstantin Khlebnikov wrote:
>>>> 
>>>> When openvswitch tries to allocate memory from offline numa node 0:
>>>>     stats = kmem_cache_alloc_node(flow_stats_cache,
>>>>                                   GFP_KERNEL | __GFP_ZERO, 0)
>>>> it triggers VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid))
>>>> [ recently replaced with VM_WARN_ON(!node_online(nid)) ] in linux/gfp.h.
>>>> This patch disables numa affinity in this case.
>>>> 
>>>> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
>>> 
>>> 
>>> ...
>>> 
>>>> diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
>>>> index f2ea83ba4763..c7f74aab34b9 100644
>>>> --- a/net/openvswitch/flow_table.c
>>>> +++ b/net/openvswitch/flow_table.c
>>>> @@ -93,7 +93,8 @@ struct sw_flow *ovs_flow_alloc(void)
>>>> 
>>>>      /* Initialize the default stat node. */
>>>>      stats = kmem_cache_alloc_node(flow_stats_cache,
>>>> -                      GFP_KERNEL | __GFP_ZERO, 0);
>>>> +                      GFP_KERNEL | __GFP_ZERO,
>>>> +                      node_online(0) ? 0 : NUMA_NO_NODE);
>>> 
>>> 
>>> Stupid question: can node 0 become offline between this check and the
>>> VM_WARN_ON? :) BTW, what kind of system has node 0 offline?
>> 
>> 
>> Another question to ask would be: is it possible for node 0 to be online
>> but memoryless?
>> 
>> I would say you are better off just making this a plain kmem_cache_alloc
>> call.  I don't see anything that indicates the memory has to come from
>> node 0, so the extra overhead of a node-specific allocation doesn't
>> provide any value.
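
For reference, Alexander's suggestion would amount to something like this
(a sketch only, not a tested patch):

	/* Let the allocator pick a suitable node instead of pinning
	 * the pre-allocated stats entry to node 0.
	 */
	stats = kmem_cache_alloc(flow_stats_cache, GFP_KERNEL | __GFP_ZERO);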
> 
> I agree that this at least makes me wonder, though I actually have
> concerns in the opposite direction - I see assumptions about this
> being on node 0 in net/openvswitch/flow.c.
> 
> Jarno, since you originally wrote this code, can you take a look to see
> if everything still makes sense?

We keep the pre-allocated stats node at array index 0; it is initially used by all CPUs. If CPUs from multiple numa nodes start updating the stats, we allocate additional stats nodes (up to one per numa node), while the CPUs on node 0 keep using the pre-allocated entry. If stats cannot be allocated from a CPU's local node, that CPU keeps using the entry at index 0. Currently the code in net/openvswitch/flow.c will keep retrying the local allocation on later updates, which may not be optimal when the local node has no memory.
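
Condensed, the lookup-and-fallback path in ovs_flow_stats_update() looks
roughly like this (paraphrased from memory, not the verbatim source):

	struct flow_stats *stats, *new_stats;
	int node = numa_node_id();

	stats = rcu_dereference(flow->stats[node]);
	if (unlikely(!stats)) {
		/* No node-local stats yet: fall back to the
		 * pre-allocated entry at index 0.
		 */
		stats = rcu_dereference(flow->stats[0]);

		/* Once the flow is written from more than one node,
		 * try a node-local allocation; on failure this CPU
		 * keeps using index 0, and the allocation is retried
		 * on later updates (the repeated retry mentioned
		 * above).
		 */
		new_stats = kmem_cache_alloc_node(flow_stats_cache,
						  GFP_NOWAIT | __GFP_THISNODE,
						  node);
		if (new_stats)
			rcu_assign_pointer(flow->stats[node], new_stats);
	}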

Allocating the memory for index 0 from a node other than node 0, as discussed here, just means that the CPUs on node 0 will keep using non-local memory for stats. In a scenario where there are CPUs on two nodes (0 and 1) but only node 1 has memory, a shared flow entry will still end up with separate memory allocated for each node, only both allocations will be on node 1. However, there is still a high likelihood that the two allocations will not share a cache line, which should prevent the nodes from invalidating each other's cached copies. Based on this I do not see a problem with relaxing the memory allocation for the default stats node. If node 0 does have memory, however, it would still be better to allocate from node 0.
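
(The separate-cache-line expectation holds as long as flow_stats_cache is
created hardware-cache-aligned, along these lines; paraphrasing the setup
in ovs_flow_init(), not quoting it:

	flow_stats_cache = kmem_cache_create("sw_flow_stats",
					     sizeof(struct flow_stats), 0,
					     SLAB_HWCACHE_ALIGN, NULL);

With SLAB_HWCACHE_ALIGN each object is aligned to a cache-line boundary, so
two stats entries do not share a line even when both live on node 1.)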

  Jarno

