Message-ID: <cc4712f7-c723-89fc-dc9c-c8db3ff8c760@gmail.com>
Date: Mon, 27 Feb 2023 22:58:47 +0000
From: Edward Cree <ecree.xilinx@...il.com>
To: Daniel Xu <dxu@...uu.xyz>
Cc: bpf@...r.kernel.org, linux-kselftest@...r.kernel.org,
netdev@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 0/8] Support defragmenting IPv(4|6) packets in
BPF

On 27/02/2023 22:04, Daniel Xu wrote:
> I don't believe full L4 headers are required in the first fragment.
> Sufficiently sneaky attackers can, I think, send a byte at a time to
> subvert your proposed algorithm. Storing skb data seems inevitable here.
> Someone can correct me if I'm wrong here.
My thinking was that legitimate traffic would never do this, and thus if
your first fragment doesn't have enough data to make a determination,
you just DROP the packet.
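Something like the following, say (an untested, whiteboard-level XDP
sketch, not anything from this series; it assumes IPv4 over Ethernet,
polices only TCP, and the program name is made up for illustration):

/* Untested sketch: drop first fragments too short to hold a full TCP
 * header, on the theory that no legitimate stack emits them.
 * Assumes IPv4 over Ethernet; names are invented for illustration.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

#ifndef IP_MF
#define IP_MF     0x2000	/* "more fragments" flag         */
#endif
#ifndef IP_OFFSET
#define IP_OFFSET 0x1fff	/* fragment offset, 8-byte units */
#endif

SEC("xdp")
int drop_short_first_frags(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	struct ethhdr *eth = data;
	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	struct iphdr *iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end || iph->ihl < 5)
		return XDP_DROP;

	__u16 frag = bpf_ntohs(iph->frag_off);
	if (!(frag & IP_MF) || (frag & IP_OFFSET))
		return XDP_PASS;	/* not a first fragment */
	if (iph->protocol != IPPROTO_TCP)
		return XDP_PASS;	/* only policing TCP in this sketch */

	/* First fragment: insist the full L4 header is present. */
	void *l4 = (void *)iph + iph->ihl * 4;
	if (l4 + sizeof(struct tcphdr) > data_end)
		return XDP_DROP;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";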
> What I find valuable about this patch series is that we can
> leverage the well understood and battle hardened kernel facilities. So
> avoid all the correctness and security issues that the kernel has spent
> 20+ years fixing.
I can certainly see the argument here. I guess it's a question of which
DoS you're more worried about: tricking the validator into thinking
good fragments are bad (the reverse is irrelevant, because if you can
trick a validator into thinking your bad fragment belongs to a previously
seen good packet, then you can equally trick a reassembler into stitching
your bad fragment into that packet), or tying lots of memory down in the
reassembly cache.
Even with reordering handling, a data structure that records which ranges
of a packet have been seen takes much less memory than storing the
complete fragment bodies. (Just a simple bitmap of 8-byte blocks, the
resolution of iph->frag_off, reduces the size by a factor of 64, not
counting all the overhead of a struct sk_buff for each fragment in the
queue. Or you could re-use the rbtree-based code from the reassembler,
just with a freshly allocated node containing only offset & length,
instead of the whole SKB.)
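To make the arithmetic concrete, here's a back-of-the-envelope sketch
of the bitmap variant in plain userspace C (struct and function names
invented for illustration): a maximal 64 KiB datagram is 8192 8-byte
blocks, so 1 KiB of bitmap per datagram versus up to 64 KiB of buffered
fragment bodies.

/* Sketch only: one bit per 8-byte block, the resolution of
 * iph->frag_off. All names are invented; len > 0 is assumed.
 */
#include <stdbool.h>
#include <stdint.h>

#define FRAG_BLOCKS (65536 / 8)		/* 8192 blocks of 8 bytes */

struct frag_seen {
	uint8_t bitmap[FRAG_BLOCKS / 8];	/* 1 KiB per datagram */
};

/* Record a validated fragment; offset/len in bytes. On the wire,
 * offset is always a multiple of 8.
 */
static void frag_seen_mark(struct frag_seen *s, uint32_t offset,
			   uint32_t len)
{
	uint32_t first = offset / 8;
	uint32_t last = (offset + len - 1) / 8;

	for (uint32_t b = first; b <= last && b < FRAG_BLOCKS; b++)
		s->bitmap[b / 8] |= 1u << (b % 8);
}

/* Has every 8-byte block in [offset, offset + len) been seen? */
static bool frag_seen_covers(const struct frag_seen *s, uint32_t offset,
			     uint32_t len)
{
	uint32_t first = offset / 8;
	uint32_t last = (offset + len - 1) / 8;

	for (uint32_t b = first; b <= last; b++)
		if (b >= FRAG_BLOCKS ||
		    !(s->bitmap[b / 8] & (1u << (b % 8))))
			return false;
	return true;
}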
And having a BPF helper effectively consume the skb is awkward, as you
noted; someone is likely to decide that skb_copy() is too slow, try to
add ctx invalidation, and thereby create a whole new swathe of potential
correctness and security issues.
Plus, imagine trying to support this in a hardware-offload XDP device.
They'd have to reimplement the entire frag cache, which is a much bigger
attack surface than just a frag validator, and they couldn't leverage
the battle-hardened kernel implementation.
> And make it trivial for the next person that comes
> along to do the right thing.
Fwiw the validator approach could *also* be a helper; it doesn't have to
be something the BPF developer writes for themselves.
But if, after thinking about the possibility, you still prefer your way,
I won't try to stop you; I just wanted to ensure it had been considered.
-ed