Message-ID: <34747C51-2F94-4B64-959B-BA4B0AA4224B@redhat.com>
Date: Fri, 29 Sep 2023 10:38:59 +0200
From: Eelco Chaudron <echaudro@...hat.com>
To: Nicholas Piggin <npiggin@...il.com>
Cc: Aaron Conole <aconole@...hat.com>, netdev@...r.kernel.org,
 dev@...nvswitch.org, Ilya Maximets <imaximet@...hat.com>,
 Flavio Leitner <fbl@...hat.com>
Subject: Re: [ovs-dev] [RFC PATCH 4/7] net: openvswitch: ovs_vport_receive
 reduce stack usage



On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:

> On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> Nicholas Piggin <npiggin@...il.com> writes:
>>
>>> Dynamically allocating the sw_flow_key reduces stack usage of
>>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>>> another GFP_ATOMIC allocation in the receive path.
>>>
>>> XXX: is this a problem with memory reserves if ovs is in a
>>> memory reclaim path, or since we have a skb allocated, is it
>>> okay to use some GFP_ATOMIC reserves?
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
>>> ---
>>
>> This represents a fairly large performance hit.  Just my own quick
>> testing on a system using two netns, iperf3, and simple forwarding rules
>> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
>> is a simple case, and doesn't involve a more involved scenario like
>> multiple bridges, tunnels, and internal ports.  I suspect such cases
>> will see an even bigger hit.
>>
>> I don't know the impact of the other changes, but just an FYI that the
>> performance impact of this change is extremely noticeable on the x86
>> platform.
>
> Thanks for the numbers. This patch is probably the biggest perf cost,
> but unfortunately it's also about the biggest saving. I might have an
> idea to improve it.

Also, were you able to figure out why we do not see this problem on x86 and arm64? Is the stack usage so much larger, or is there some other root cause? And is there a simple reproducer? That might help in profiling the differences between the architectures.

