Message-ID: <20230718153631.7a08a6ec@kernel.org>
Date: Tue, 18 Jul 2023 15:36:31 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: David Ahern <dsahern@...nel.org>, Ivan Babrou <ivan@...udflare.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>, kernel-team
<kernel-team@...udflare.com>, Eric Dumazet <edumazet@...gle.com>, "David S.
Miller" <davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>, Steven
Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>
Subject: Re: Stacks leading into skb:kfree_skb
On Fri, 14 Jul 2023 18:54:14 -0600 David Ahern wrote:
> > I made some aggregations for the stacks we see leading into
> > skb:kfree_skb endpoint. There's a lot of data that is not easily
> > digestible, so I lightly massaged the data and added flamegraphs in
> > addition to raw stack counts. Here's the gist link:
> >
> > * https://gist.github.com/bobrik/0e57671c732d9b13ac49fed85a2b2290
>
> I see a lot of packet_rcv as the tip before kfree_skb. How many packet
> sockets do you have running on that box? Can you accumulate the total
> packet_rcv -> kfree_skb_reasons into 1 count -- regardless of remaining
> stacktrace?
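
(Aside, on rolling those up into one number: a minimal sketch of that
aggregation, assuming a libbpf/CO-RE setup with vmlinux.h -- count every
skb:kfree_skb hit keyed by the kfree_skb call-site address and the drop
reason, then symbolize the addresses in user space and sum the rows that
land in packet_rcv. Program and map names below are made up for
illustration.)

/*
 * Sketch only (not from this thread): count skb:kfree_skb events keyed by
 * the kfree_skb call site and the drop reason.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct drop_key {
	__u64 location;		/* kfree_skb call site, symbolized later */
	__u64 reason;		/* enum skb_drop_reason value */
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16384);
	__type(key, struct drop_key);
	__type(value, __u64);
} drop_counts SEC(".maps");

SEC("tracepoint/skb/kfree_skb")
int count_kfree_skb(struct trace_event_raw_kfree_skb *ctx)
{
	struct drop_key key = {
		.location = (__u64)ctx->location,
		.reason = ctx->reason,
	};
	__u64 one = 1, *cnt;

	cnt = bpf_map_lookup_elem(&drop_counts, &key);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);
	else
		bpf_map_update_elem(&drop_counts, &key, &one, BPF_NOEXIST);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Summing the rows whose location resolves inside packet_rcv then gives one
count per drop reason, regardless of the rest of the stack trace.
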
On a quick look we have 3 branches which can get us to kfree_skb from
packet_rcv:
	if (skb->pkt_type == PACKET_LOOPBACK)
		goto drop;
	...
	if (!net_eq(dev_net(dev), sock_net(sk)))
		goto drop;
	...
	res = run_filter(skb, sk, snaplen);
	if (!res)
		goto drop_n_restore;
I'd guess it's the last one? Which we should mark with the SOCKET_FILTER
drop reason?
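
For that filter branch, a rough sketch of what I mean (not a tested patch;
the reason plumbing around drop_n_restore/drop is assumed, only the quoted
lines above are from af_packet.c):

	enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
	...
	res = run_filter(skb, sk, snaplen);
	if (!res) {
		reason = SKB_DROP_REASON_SOCKET_FILTER;
		goto drop_n_restore;
	}
	...
drop:
	/* reason then shows up in the skb:kfree_skb tracepoint */
	kfree_skb_reason(skb, reason);
	return 0;
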