Message-ID: <20161128213444.GA9858@breakpoint.cc>
Date: Mon, 28 Nov 2016 22:34:44 +0100
From: Florian Westphal <fw@...len.de>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>,
Florian Westphal <fw@...len.de>,
syzkaller <syzkaller@...glegroups.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
David Miller <davem@...emloft.net>,
Tom Herbert <tom@...bertland.com>,
Alexander Duyck <aduyck@...antis.com>,
Jiri Benc <jbenc@...hat.com>,
Sabrina Dubroca <sd@...asysnail.net>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: net: GPF in eth_header
Eric Dumazet <eric.dumazet@...il.com> wrote:
> > Might be a bug added in commit daaa7d647f81f3
> > ("netfilter: ipv6: avoid nf_iterate recursion")
> >
> > Florian, what do you think of dropping a packet that presumably was
> > mangled badly by nf_ct_frag6_queue() ?
ipv4 definitely frees malformed packets.
In general, I think netfilter should avoid 'silent' drops where possible
and let the skb continue, but of course such skbs should not be made worse
than what we had to begin with...
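For comparison, the ipv4 hook can get away with NF_STOLEN because
ip_defrag() either queues the skb or frees it itself on error.  Roughly
(a sketch from memory, not a verbatim copy of nf_defrag_ipv4.c; the real
code goes through a nf_ct_ipv4_gather_frags() wrapper with BHs disabled):

static unsigned int ipv4_conntrack_defrag_sketch(void *priv,
						 struct sk_buff *skb,
						 const struct nf_hook_state *state)
{
	/* Gather fragments. */
	if (ip_is_fragment(ip_hdr(skb))) {
		enum ip_defrag_users user =
			nf_ct_defrag_user(state->hook, skb);

		/* Nonzero means ip_defrag() consumed the skb: it was either
		 * queued to wait for more fragments or freed because it was
		 * malformed.  Either way it is no longer ours, so NF_STOLEN
		 * is safe here. */
		if (ip_defrag(state->net, skb, user))
			return NF_STOLEN;
	}
	/* Not a fragment, or reassembly completed: let the skb continue. */
	return NF_ACCEPT;
}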
> > (Like about 48 byte pulled :(, and/or skb->csum changed )
I think this warrants a review of ipv6 reassembly too; the bug reported here
occurs because ipv6 nf defrag is also done on output.
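(For reference, a from-memory sketch of the hook registration in
nf_defrag_ipv6_hooks.c: the same ipv6_defrag hook is registered for both
PRE_ROUTING and LOCAL_OUT, which is why a bad locally generated packet
reaches this code.)

static struct nf_hook_ops ipv6_defrag_ops[] = {
	{
		.hook		= ipv6_defrag,
		.pf		= NFPROTO_IPV6,
		.hooknum	= NF_INET_PRE_ROUTING,
		.priority	= NF_IP6_PRI_CONNTRACK_DEFRAG,
	},
	{
		.hook		= ipv6_defrag,
		.pf		= NFPROTO_IPV6,
		.hooknum	= NF_INET_LOCAL_OUT,
		.priority	= NF_IP6_PRI_CONNTRACK_DEFRAG,
	},
};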
> diff --git a/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c b/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
> index f7aab5ab93a5..508739a3ca2a 100644
> --- a/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
> +++ b/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
> @@ -65,9 +65,9 @@ static unsigned int ipv6_defrag(void *priv,
>
> err = nf_ct_frag6_gather(state->net, skb,
> nf_ct6_defrag_user(state->hook, skb));
> - /* queued */
> - if (err == -EINPROGRESS)
> - return NF_STOLEN;
> + /* queued or mangled ... */
> + if (err)
> + return (err == -EINPROGRESS) ? NF_STOLEN : NF_DROP;
>
> return NF_ACCEPT;
Looks good.  We'll need to change some of the errno return codes in
nf_ct_frag6_gather to 0 for this to work, though, which should not be too
hard ;)
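To spell out the mapping the hook would then rely on (illustration only,
not a helper I'd propose adding to the tree):

static unsigned int defrag6_verdict(int err)
{
	if (err == 0)			/* skb untouched or fully reassembled */
		return NF_ACCEPT;
	if (err == -EINPROGRESS)	/* skb queued, waiting for more frags */
		return NF_STOLEN;
	return NF_DROP;			/* skb may already be mangled */
}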