Message-ID: <e64291ac-98e0-894f-12cb-d01347aef36c@kernel.org>
Date: Fri, 14 Jul 2023 18:54:14 -0600
From: David Ahern <dsahern@...nel.org>
To: Ivan Babrou <ivan@...udflare.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Cc: kernel-team <kernel-team@...udflare.com>,
Eric Dumazet <edumazet@...gle.com>, "David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>, Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>, Jakub Kicinski <kuba@...nel.org>
Subject: Re: Stacks leading into skb:kfree_skb
On 7/14/23 4:13 PM, Ivan Babrou wrote:
> As requested by Jakub Kicinski and David Ahern here:
>
> * https://lore.kernel.org/netdev/20230713201427.2c50fc7b@kernel.org/
>
> I made some aggregations for the stacks we see leading into the
> skb:kfree_skb tracepoint. There's a lot of data that is not easily
> digestible, so I lightly massaged it and added flamegraphs in
> addition to the raw stack counts. Here's the gist link:
>
> * https://gist.github.com/bobrik/0e57671c732d9b13ac49fed85a2b2290
I see a lot of packet_rcv as the tip before kfree_skb. How many packet
sockets do you have running on that box? Can you accumulate the total
packet_rcv -> kfree_skb_reason events into one count -- regardless of the
remaining stack trace?
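
FWIW, that accumulation should be doable straight from the stashed perf
script output. A rough sketch (hypothetical; it assumes the default perf
script layout of one header line per sample followed by indented
"addr symbol+off (dso)" call-chain lines and a blank separator, with the
script output fed on stdin):

#!/usr/bin/env python3
# Accumulate skb:kfree_skb samples whose call chain goes through packet_rcv
# into a single count, regardless of the rest of the stack.  Sketch only:
# assumes each perf script sample is a header line followed by indented
# frame lines and a blank separator.
import re
import sys
from collections import Counter

FRAME_RE = re.compile(r'^\s+[0-9a-f]+\s+(\S+?)(?:\+0x[0-9a-f]+)?\s+\(')

def samples(lines):
    # Yield one list of symbol names per sample, innermost frame first.
    stack = []
    for line in lines:
        m = FRAME_RE.match(line)
        if m:
            stack.append(m.group(1))
        elif stack:  # blank line or next header ends the current sample
            yield stack
            stack = []
    if stack:
        yield stack

counts = Counter()
for stack in samples(sys.stdin):
    key = 'packet_rcv -> kfree_skb' if 'packet_rcv' in stack else 'other -> kfree_skb'
    counts[key] += 1

for key, n in counts.most_common():
    print(f'{n:10d}  {key}')

Piping perf script output for the skb:kfree_skb event into that would print
one line per bucket.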
>
> Let me know if any other format works better for you. I have perf
> script output stashed just in case.
I was expecting something more like perf report output, which should
consolidate similar stack traces, but the flamegraph worked.
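
If it helps, roughly that consolidation can also be produced from the stashed
perf script output by collapsing identical call chains into folded stacks
(a sketch, with the same layout assumptions and frame regex as above; the
folded lines are also the format flamegraph.pl consumes):

#!/usr/bin/env python3
# Collapse `perf script` call chains into "frameA;frameB;... count" lines so
# that identical stack traces are consolidated.  Reads perf script output on
# stdin; sketch only, same layout assumptions as the previous example.
import re
import sys
from collections import Counter

FRAME_RE = re.compile(r'^\s+[0-9a-f]+\s+(\S+?)(?:\+0x[0-9a-f]+)?\s+\(')

folded = Counter()
stack = []
for line in sys.stdin:
    m = FRAME_RE.match(line)
    if m:
        stack.append(m.group(1))
    elif stack:
        # perf script prints the innermost frame first; reverse to root-first
        folded[';'.join(reversed(stack))] += 1
        stack = []
if stack:
    folded[';'.join(reversed(stack))] += 1

for trace, n in folded.most_common():
    print(f'{trace} {n}')

Sorting by count puts the heaviest unique stacks first.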
>
> As a reminder (also mentioned in the gist), we're on v6.1, which is
> the latest LTS.
>
> I can't explain the reasons for all the network paths we have, but our
> kernel / network people are CC'd if you have any questions.