Message-Id: <CVZGUWQGYWQX.1W7BH28XB6WKM@wheely>
Date: Wed, 04 Oct 2023 17:11:56 +1000
From: "Nicholas Piggin" <npiggin@...il.com>
To: "Eelco Chaudron" <echaudro@...hat.com>
Cc: "Aaron Conole" <aconole@...hat.com>, <netdev@...r.kernel.org>,
<dev@...nvswitch.org>, "Ilya Maximets" <imaximet@...hat.com>, "Flavio
Leitner" <fbl@...hat.com>
Subject: Re: [ovs-dev] [RFC PATCH 4/7] net: openvswitch: ovs_vport_receive
reduce stack usage
On Fri Sep 29, 2023 at 6:38 PM AEST, Eelco Chaudron wrote:
>
>
> On 29 Sep 2023, at 9:00, Nicholas Piggin wrote:
>
> > On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
> >> Nicholas Piggin <npiggin@...il.com> writes:
> >>
> >>> Dynamically allocating the sw_flow_key reduces stack usage of
> >>> ovs_vport_receive from 544 bytes to 64 bytes at the cost of
> >>> another GFP_ATOMIC allocation in the receive path.
> >>>
> >>> XXX: is this a problem with memory reserves if ovs is in a
> >>> memory reclaim path, or since we have a skb allocated, is it
> >>> okay to use some GFP_ATOMIC reserves?
> >>>
> >>> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
> >>> ---
> >>
> >> This represents a fairly large performance hit. Just my own quick
> >> testing on a system using two netns, iperf3, and simple forwarding rules
> >> shows between 2.5% and 4% performance reduction on x86-64. Note that it
> >> is a simple case, and doesn't involve a more involved scenario like
> >> multiple bridges, tunnels, and internal ports. I suspect such cases
> >> will see even bigger hit.
> >>
> >> I don't know the impact of the other changes, but just an FYI that the
> >> performance impact of this change is extremely noticeable on the x86
> >> platform.
> >
> > Thanks for the numbers. This patch is probably the biggest perf cost,
> > but unfortunately it's also about the biggest saving. I might have an
> > idea to improve it.
>
> Also, were you able to figure out why we do not see this problem on
> x86 and arm64? Is the stack usage so much larger, or is there some
> other root cause?
Haven't pinpointed it exactly. The ppc64le interrupt entry frame is
nearly 3x larger than x86-64's, about 200 bytes. So that's 400 if a
hard interrupt (not seen in the backtrace) is what overflowed it. Stack
alignment I think is 32 bytes vs 16 for x86-64. And differences in
spilling, non-volatile register use, and inlining choices by the
compiler could nudge things one way or another. There are few, if any,
ppc64le-specific data structures on the stack anywhere in this call
chain that should cause much more bloat though, AFAIKS.
So other archs should not be far away from overflowing 16kB, I think.
> Is there a simple reproducer, as this might help you
> profile the differences between the architectures?
Unfortunately not; it's some Kubernetes contraption that I don't know
how to reproduce myself.
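For anyone following along, the change being discussed moves the
sw_flow_key off the stack in ovs_vport_receive() and kmallocs it with
GFP_ATOMIC instead, roughly along these lines (a simplified sketch of
the idea only, not the actual patch; most of the function body is
elided):

int ovs_vport_receive(struct vport *vport, struct sk_buff *skb,
                      const struct ip_tunnel_info *tun_info)
{
        struct sw_flow_key *key;        /* was an on-stack struct of several hundred bytes */
        int error;

        /* Per-packet atomic allocation replaces the large stack variable. */
        key = kmalloc(sizeof(*key), GFP_ATOMIC);
        if (unlikely(!key)) {
                kfree_skb(skb);
                return -ENOMEM;
        }

        /* ... tunnel info / skb setup elided ... */

        error = ovs_flow_key_extract(tun_info, skb, key);
        if (unlikely(error)) {
                kfree_skb(skb);
                goto out;
        }
        ovs_dp_process_packet(skb, key);
out:
        kfree(key);
        return error;
}

The per-packet GFP_ATOMIC allocation is what the 2.5-4% regression
Aaron measured above is attributed to.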
Thanks,
Nick