Message-ID: <CABWYdi2BGi=iRCfLhmQCqO=1eaQ1WaCG7F9WsJrz-7==ocZidg@mail.gmail.com>
Date: Fri, 14 Jul 2023 16:21:08 -0700
From: Ivan Babrou <ivan@...udflare.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Yan Zhai <yan@...udflare.com>, netdev@...r.kernel.org, 
	linux-kernel@...r.kernel.org, kernel-team@...udflare.com, 
	Eric Dumazet <edumazet@...gle.com>, "David S. Miller" <davem@...emloft.net>, 
	Paolo Abeni <pabeni@...hat.com>, Steven Rostedt <rostedt@...dmis.org>, 
	Masami Hiramatsu <mhiramat@...nel.org>, David Ahern <dsahern@...nel.org>
Subject: Re: [RFC PATCH net-next] tcp: add a tracepoint for tcp_listen_queue_drop

On Thu, Jul 13, 2023 at 8:14 PM Jakub Kicinski <kuba@...nel.org> wrote:
> > I don't know why kfree_skb is called so much. I also disagree with
> > Yan's claim that it's not too much, because it's a lot (especially
> > compared with near zero for my proposed tracepoint). I can easily see
> > 300-500k calls per second into it:
> >
> > $ perf stat -I 1000 -a -e skb:kfree_skb -- sleep 10
> > #           time             counts unit events
> >      1.000520165             10,108      skb:kfree_skb
> >      2.010494526             11,178      skb:kfree_skb
> >      3.075503743             10,770      skb:kfree_skb
> >      4.122814843             11,334      skb:kfree_skb
> >      5.128518432             12,020      skb:kfree_skb
> >      6.176504094             11,117      skb:kfree_skb
> >      7.201504214             12,753      skb:kfree_skb
> >      8.229523643             10,566      skb:kfree_skb
> >      9.326499044            365,239      skb:kfree_skb
> >     10.002106098            313,105      skb:kfree_skb
> > $ perf stat -I 1000 -a -e skb:kfree_skb -- sleep 10
> > #           time             counts unit events
> >      1.000767744             52,240      skb:kfree_skb
> >      2.069762695            508,310      skb:kfree_skb
> >      3.102763492            417,895      skb:kfree_skb
> >      4.142757608            385,981      skb:kfree_skb
> >      5.190759795            430,154      skb:kfree_skb
> >      6.243765384            405,707      skb:kfree_skb
> >      7.290818228            362,934      skb:kfree_skb
> >      8.297764298            336,702      skb:kfree_skb
> >      9.314287243            353,039      skb:kfree_skb
> >     10.002288423            251,414      skb:kfree_skb
> >
> > Most of it is NOT_SPECIFIED (1s data from one CPU during a spike):
> >
> > $ perf script | sed 's/.*skbaddr=//' | awk '{ print $NF }' | sort |
> > uniq -c | sort -n | tail
> >       1 TCP_CLOSE
> >       2 NO_SOCKET
> >       4 TCP_INVALID_SEQUENCE
> >       4 TCP_RESET
> >      13 TCP_OLD_DATA
> >      14 NETFILTER_DROP
> >    4594 NOT_SPECIFIED
> >
> > We can start a separate discussion to break it down by category if it
> > would help. Let me know what kind of information you would like us to
> > provide to help with that. I assume you're interested in kernel stacks
> > leading to kfree_skb with NOT_SPECIFIED reason, but maybe there's
> > something else.
>
> Just the stacks.

Here you go: https://lore.kernel.org/netdev/CABWYdi00L+O30Q=Zah28QwZ_5RU-xcxLFUK2Zj08A8MrLk9jzg@mail.gmail.com/

> > Even if I was only interested in one specific reason, I would still
> > have to arm the whole tracepoint and route a ton of skbs I'm not
> > interested in into my bpf code. This seems like a lot of overhead,
> > especially if I'm dropping some attack packets.
>
> That's what I meant with my drop vs exception comment. We already have
> two tracepoints on the skb free path (free and consume), adding another
> shouldn't raise too many eyebrows.

I'm a bit confused. Previously you said:

> Specifically what I'm wondering is whether we should also have
> a separation between policy / "firewall drops" and error / exception
> drops. Within the skb drop reason codes, I mean.

My understanding was that you proposed adding more SKB_DROP_REASON_*,
but now you seem to imply that we might want to add another
tracepoint. Could you clarify which path you have in mind?

We can add a few reasons that would satisfy my need by covering
whatever results in tcp_listendrop() calls today. The problem is:
unless we remove some other reasons from kfree_skb, adding more
reasons for firewall drops / exceptions wouldn't change the cost at
all. We'd still have the same number of calls into the tracepoint and
the condition to find "interesting" reasons would be the same:

if (reason == SKB_DROP_REASON_TCP_OVERFLOW_OR_SOMETHING)

It still seems very expensive to consume a firehose of kfree_skb just
to find some rare nuggets.
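To make the cost point concrete, here is a minimal userspace simulation of that filtering pattern (the enum values and function names are made up for illustration; the real consumer would be a bpf program attached to skb:kfree_skb): every drop still enters the handler, and the targeted reason only gates the work done afterwards.

```c
#include <assert.h>

/* Hypothetical subset of skb drop reasons, for illustration only;
 * the real enum lives in the kernel's include/net/dropreason*.h. */
enum drop_reason {
	REASON_NOT_SPECIFIED,
	REASON_NO_SOCKET,
	REASON_TCP_LISTEN_OVERFLOW,	/* hypothetical targeted reason */
};

static unsigned long calls;	/* every drop we are handed */
static unsigned long matches;	/* the rare drops we actually want */

/* Stand-in for a handler on skb:kfree_skb: it runs for every single
 * drop and throws away everything but the one interesting reason. */
static void on_kfree_skb(enum drop_reason reason)
{
	calls++;			/* per-call overhead paid regardless */
	if (reason != REASON_TCP_LISTEN_OVERFLOW)
		return;
	matches++;			/* the rare nugget */
}
```

With the per-second numbers above, calls would sit in the hundreds of thousands while matches stays near zero, which is exactly the overhead a targeted tracepoint avoids.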

> > Perhaps a lot of extra NOT_SPECIFIED stuff can be fixed and removed
> > from kfree_skb. It's not something I can personally do as it requires
> > much deeper network code understanding than I possess. For TCP we'll
> > also have to add some extra reasons for kfree_skb, because currently
> > it's all NOT_SPECIFIED (no reason set in the accept path):
> >
> > * https://elixir.bootlin.com/linux/v6.5-rc1/source/net/ipv4/tcp_input.c#L6499
> > * https://elixir.bootlin.com/linux/v6.5-rc1/source/net/ipv4/tcp_ipv4.c#L1749
> >
> > For UDP we already have SKB_DROP_REASON_SOCKET_RCVBUFF, so I tried my
> > best to implement what I wanted based on that. It's not very
> > approachable, as you'd have to extract the destination port yourself
> > from the raw skb. As Yan said, for TCP people often rely on skb->sk,
> > which is just not present when the incoming SYN is dropped. I failed
> > to find a good example of extracting a destination port that I could
> > replicate. So far I have just a per-reason breakdown working:
> >
> > * https://github.com/cloudflare/ebpf_exporter/pull/233
> >
> > If you have an ebpf example that would help me extract the destination
> > port from an skb in kfree_skb, I'd be interested in taking a look and
> > trying to make it work.
> >
> > The need to extract protocol-level information in ebpf only makes
> > kfree_skb more expensive for the purpose of catching rare cases
> > when we run out of buffer space (UDP) or listen queue (TCP). These two
> > cases are very common failure scenarios that people are interested in
> > catching with straightforward tracepoints that can give them the
> > needed information easily and cheaply.
> >
> > I sympathize with the desire to keep the number of tracepoints in
> > check, but I also feel like UDP buffer drops and TCP listen drops
> > tracepoints are very much justified to exist.
>
> I'm not completely opposed to the tracepoints where needed. It's more
> of trying to make sure we do due diligence on the existing solutions.
> Or maybe not even due diligence as much as pay off some technical debt.

The arguments for a new tracepoint from my due diligence:

1. There are too many calls into kfree_skb. Maybe that's fixable to a
degree, but there would still be a big discrepancy in the number of
calls between my proposed targeted tracepoint and skb:kfree_skb. For
some setups, consuming that volume might not be feasible at all.
2. It's hard to extract the necessary information from an skb without
an sk attached, both from a usability perspective (you need to parse
the network header yourself with a non-trivial amount of code) and
from a cost perspective (parsing is not zero cost). The latter doesn't
matter as much if we target a specific reason.
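On point 2, for what it's worth, the parsing one has to replicate looks roughly like this for the simplest IPv4/TCP case (plain C over a byte buffer here; in bpf you'd read the same offsets out of the skb's network header, and all names in this sketch are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Extract the TCP destination port from a buffer that starts at the
 * IPv4 header, the way a bpf program on kfree_skb would have to when
 * skb->sk is NULL. Returns 0 for anything it cannot parse. */
static uint16_t tcp_dport_from_ipv4(const uint8_t *pkt, size_t len)
{
	if (len < 20)
		return 0;
	if ((pkt[0] >> 4) != 4)			/* IP version field */
		return 0;
	size_t ihl = (pkt[0] & 0x0f) * 4;	/* IP header length, bytes */
	if (ihl < 20 || len < ihl + 4)
		return 0;
	if (pkt[9] != 6)			/* IPPROTO_TCP */
		return 0;
	/* TCP header: source port (2 bytes), then destination port. */
	return (uint16_t)(pkt[ihl + 2] << 8 | pkt[ihl + 3]);
}
```

And real code would also need IPv6, VLAN/encap awareness, bounds checks against the actual skb data, and so on, which is exactly the non-trivial, non-zero-cost part.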
