Message-ID: <f7ty1g9cmf6.fsf@redhat.com>
Date: Wed, 11 Oct 2023 09:34:53 -0400
From: Aaron Conole <aconole@...hat.com>
To: "Nicholas Piggin" <npiggin@...il.com>
Cc: "Eelco Chaudron" <echaudro@...hat.com>, <netdev@...r.kernel.org>,
	<dev@...nvswitch.org>, "Ilya Maximets" <imaximet@...hat.com>,
	"Flavio Leitner" <fbl@...hat.com>
Subject: Re: [ovs-dev] [RFC PATCH 4/7] net: openvswitch: ovs_vport_receive reduce stack usage

"Nicholas Piggin" <npiggin@...il.com> writes:

> On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
>>
>>
>> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
>>
>> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> >> Nicholas Piggin <npiggin@...il.com> writes:
>> >>
>> >>> Dynamically allocating the sw_flow_key reduces stack usage of
>> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>> >>> another GFP_ATOMIC allocation in the receive path.
>> >>>
>> >>> XXX: is this a problem with memory reserves if ovs is in a
>> >>> memory reclaim path, or since we have a skb allocated, is it
>> >>> okay to use some GFP_ATOMIC reserves?
>> >>>
>> >>> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
>> >>> ---
>> >>
>> >> This represents a fairly large performance hit. Just my own quick
>> >> testing on a system using two netns, iperf3, and simple forwarding rules
>> >> shows between 2.5% and 4% performance reduction on x86-64. Note that it
>> >> is a simple case, and doesn't involve a more involved scenario like
>> >> multiple bridges, tunnels, and internal ports. I suspect such cases
>> >> will see even bigger hit.
>> >>
>> >> I don't know the impact of the other changes, but just an FYI that the
>> >> performance impact of this change is extremely noticeable on x86
>> >> platform.
>> >
>> > Thanks for the numbers. This patch is probably the biggest perf cost,
>> > but unfortunately it's also about the biggest saving. I might have an
>> > idea to improve it.
>>
>> Also, were you able to figure out why we do not see this problem on
>> x86 and arm64? Is the stack usage so much larger, or is there some
>> other root cause? Is there a simple replicator, as this might help
>> you profile the differences between the architectures?
>
> I found some snippets of equivalent call chain (this is for 4.18 RHEL8
> kernels, but it's just to give a general idea of stack overhead
> differences in C code). Frame size annotated on the right hand side:
>
> [c0000007ffdba980] do_execute_actions      496
> [c0000007ffdbab70] ovs_execute_actions     128
> [c0000007ffdbabf0] ovs_dp_process_packet   208
> [c0000007ffdbacc0] clone_execute           176
> [c0000007ffdbad70] do_execute_actions      496
> [c0000007ffdbaf60] ovs_execute_actions     128
> [c0000007ffdbafe0] ovs_dp_process_packet   208
> [c0000007ffdbb0b0] ovs_vport_receive       528
> [c0000007ffdbb2c0] internal_dev_xmit
>   total = 2368
>
> [ff49b6d4065a3628] do_execute_actions      416
> [ff49b6d4065a37c8] ovs_execute_actions      48
> [ff49b6d4065a37f8] ovs_dp_process_packet   112
> [ff49b6d4065a3868] clone_execute            64
> [ff49b6d4065a38a8] do_execute_actions      416
> [ff49b6d4065a3a48] ovs_execute_actions      48
> [ff49b6d4065a3a78] ovs_dp_process_packet   112
> [ff49b6d4065a3ae8] ovs_vport_receive       496
> [ff49b6d4065a3cd8] netdev_frame_hook
>   total = 1712
>
> That's more significant than I thought, nearly 40% more stack usage for
> ppc even with 3 frames having large local variables that can't be
> avoided for either arch.
>
> So, x86_64 could be quite safe with its 16kB stack for the same
> workload, explaining why same overflow has not been seen there.
This is interesting - is it possible that we could resolve this without
needing to change the kernel - or at least without changing how OVS
works?  Why are these so different?  Maybe there's some bloat in some of
the ppc data structures that can be addressed?  For example,
ovs_execute_actions shouldn't really be that different, but I wonder if
the way the per-cpu infra works, or the way the deferred action
processing gets inlined, could be causing stack bloat?

> Thanks,
> Nick
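
For context, the change discussed in the quoted patch description moves the
struct sw_flow_key that ovs_vport_receive() keeps on the stack (most of its
~544-byte frame) into a GFP_ATOMIC heap allocation. The sketch below is only
an illustration of that shape of change, not the actual RFC patch: the
function name ovs_vport_receive_sketch() is hypothetical, the housekeeping
the real function performs is omitted, and the extract/process helpers are
assumed to keep roughly their current signatures.

    /*
     * Illustrative sketch only (not the RFC patch itself): replace the
     * on-stack struct sw_flow_key in the vport receive path with a heap
     * allocation, trading most of the large stack frame for a
     * GFP_ATOMIC kmalloc in the receive path.
     */
    #include <linux/slab.h>
    #include <linux/skbuff.h>

    #include "datapath.h"   /* ovs_dp_process_packet(), struct vport */
    #include "flow.h"       /* struct sw_flow_key, ovs_flow_key_extract() */

    static int ovs_vport_receive_sketch(struct vport *vport,
                                        struct sk_buff *skb,
                                        const struct ip_tunnel_info *tun_info)
    {
            struct sw_flow_key *key;
            int error;

            /* Was: "struct sw_flow_key key;" on the stack. */
            key = kmalloc(sizeof(*key), GFP_ATOMIC);
            if (unlikely(!key)) {
                    kfree_skb(skb);
                    return -ENOMEM;
            }

            /* Extract the flow key from the skb, then run the datapath
             * actions against it.
             */
            error = ovs_flow_key_extract(tun_info, skb, key);
            if (unlikely(error)) {
                    kfree_skb(skb);
                    goto out_free;
            }

            ovs_dp_process_packet(skb, key);
            error = 0;

    out_free:
            /* The key is only needed while this packet is processed. */
            kfree(key);
            return error;
    }

GFP_ATOMIC is assumed here because the receive path cannot sleep, which is
exactly the memory-reserve trade-off raised in the XXX note of the quoted
patch description.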