Message-Id: <CW62DEF1LEWB.3KK4CQJNGIRYO@wheely>
Date: Thu, 12 Oct 2023 11:19:28 +1000
From: "Nicholas Piggin" <npiggin@...il.com>
To: "Aaron Conole" <aconole@...hat.com>
Cc: <netdev@...r.kernel.org>, <dev@...nvswitch.org>, "Pravin B Shelar"
<pshelar@....org>, "Eelco Chaudron" <echaudro@...hat.com>, "Ilya Maximets"
<imaximet@...hat.com>, "Flavio Leitner" <fbl@...hat.com>, "Paolo Abeni"
<pabeni@...hat.com>, "Jakub Kicinski" <kuba@...nel.org>, "David S. Miller"
<davem@...emloft.net>, "Eric Dumazet" <edumazet@...gle.com>
Subject: Re: [PATCH 0/7] net: openvswitch: Reduce stack usage
On Wed Oct 11, 2023 at 11:23 PM AEST, Aaron Conole wrote:
> Nicholas Piggin <npiggin@...il.com> writes:
>
> > Hi,
> >
> > I'll post this out again to keep discussion going. Thanks all for the
> > testing and comments so far.
>
> Thanks for the update - did you mean for this to be tagged RFC as well?
Yeah, of course it wasn't intended for merge with no Reviewed-by tags
or tests. I intended to tag it RFC v2.
>
> I don't see any performance data with the deployments on x86_64 and
> ppc64le that cause these stack overflows. Are you able to provide the
> impact on ppc64le and x86_64?
I don't think that will be easy, but those deployments aren't pushing
such high rates, so the numbers wouldn't say much. If you want to show
the worst case, throughput and latency microbenchmarks should do it.
It's the same tradeoff, and presumably the same reason the per-cpu key
allocator was added in the first place. It's probably more expensive
than the stack, but on a similar order of magnitude, O(cycles), versus
slab, which is probably O(100s of cycles).
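To make that tradeoff concrete, here is a minimal sketch of the
pattern (hypothetical names, not the actual net/openvswitch code): a
key that would otherwise be a large on-stack variable comes from a
small per-CPU array, which costs roughly O(cycles) while preemption
is disabled, versus O(100s of cycles) for a slab allocation:

#include <linux/percpu.h>
#include <linux/types.h>

#define KEY_STACK_DEPTH	4

struct big_key {			/* stand-in for struct sw_flow_key */
	u8 data[512];
};

struct pcpu_key_stack {
	int level;			/* next free slot */
	struct big_key keys[KEY_STACK_DEPTH];
};

static DEFINE_PER_CPU(struct pcpu_key_stack, key_stack);

/* Called with preemption disabled (softirq), like the OVS datapath. */
static struct big_key *key_get(void)
{
	struct pcpu_key_stack *ks = this_cpu_ptr(&key_stack);

	if (ks->level >= KEY_STACK_DEPTH)
		return NULL;	/* recursion too deep; caller must bail out */
	return &ks->keys[ks->level++];
}

static void key_put(void)
{
	this_cpu_ptr(&key_stack)->level--;
}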
> I guess the change probably should be tagged as -next since it doesn't
> really have a specific set of commits it is "fixing." It's really like
> a major change and shouldn't really go through stable trees, but I'll
> let the maintainers tell me off if I got it wrong.
It should go upstream first if anything. I thought it was relatively
simple and elegant to reuse the per-cpu key allocator though :(
It is a kernel crash, so we need something for stable. But in a case
like this there's not one single problem. Linux kernel stack use has
always been pretty dumb - "don't use too much", for some values of
too much, and just cross your fingers that the config, compiler, and
workload don't hit some overflow case.
And powerpc has always used more stack than x86, so it should probably
stay one power of two larger to be safe. That may be the best fix for
-stable.
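For reference, the powerpc stack size is derived from THREAD_SHIFT, so
going one power of two larger is a one-line change; roughly like this
(paraphrasing arch/powerpc/include/asm/thread_info.h, details vary by
kernel version and config):

/* THREAD_SHIFT defaults to 14 on ppc64, i.e. 16KB stacks; bumping it
 * to 15 would give 32KB. */
#define THREAD_SHIFT	CONFIG_THREAD_SHIFT
#define THREAD_SIZE	(1 << THREAD_SHIFT)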
But also, ovs uses too much stack. Look at the stack sizes in the first
RFC patch: ovs takes the 5 largest. The kernel's practice has always
been not to put large variables on the stack, and when you introduce
significant recursion that puts extra onus on you to be lean, even if
it costs a percent. There are probably lots of places in the kernel
that could save a few cycles by sticking large structures on the
stack, but unfortunately we can't all do that.
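For illustration, the usual pattern looks something like this
(hypothetical names; a sketch, not code from the series): the large
structure is allocated from the heap even though that costs extra
cycles over a stack variable:

#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/types.h>

struct huge_ctx {
	u8 scratch[2048];	/* far too big to live on the stack */
};

int do_work(void)
{
	struct huge_ctx *ctx;
	int err = 0;

	/* Not "struct huge_ctx ctx;" - that would eat 2KB of kernel stack. */
	ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	/* ... use ctx ... */

	kfree(ctx);
	return err;
}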
Thanks,
Nick