Message-ID: <4D7FAFC5.9080101@iki.fi>
Date:	Tue, 15 Mar 2011 20:28:21 +0200
From:	Timo Teräs <timo.teras@....fi>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	Doug Kehn <rdkehn@...oo.com>, netdev@...r.kernel.org
Subject: Re: Multicast Fails Over Multipoint GRE Tunnel

On 03/15/2011 06:36 PM, Timo Teräs wrote:
> On 03/15/2011 05:34 PM, Eric Dumazet wrote:
>> (Timo mentioned:
>> 	If the NOARP packets are not dropped, ipgre_tunnel_xmit() will
>> 	take rt->rt_gateway (= the NBMA IP) and use that for the route
>> 	lookup (and may lead to bogus xfrm acquires).)
>>
>> Does the following work for you?
> 
> I recall that _header() is called with daddr being a valid pointer,
> but pointing to zero memory. So basically my situation would break with
> this.

OK, I've now gone through the code paths, and I believe I originally
assumed that ipgre_tunnel_xmit() should never see tiph->daddr == 0 if
ipgre_header() had been called.
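
For reference, the fallback in question in ipgre_tunnel_xmit() is
roughly the following (simplified and paraphrased from memory, not the
exact code):

	/* Simplified paraphrase of the xmit-path check in
	 * net/ipv4/ip_gre.c; not the exact upstream code. */
	dst = tiph->daddr;
	if (dst == 0) {
		/* NBMA tunnel: no fixed remote, so the outer destination
		 * is taken from the route's gateway, i.e. whatever the
		 * NOARP neighbour entry / _header() handed us. */
		if (skb->protocol == htons(ETH_P_IP)) {
			rt = skb_rtable(skb);
			if ((dst = rt->rt_gateway) == 0)
				goto tx_error_icmp;
		} else
			goto tx_error;
	}

So a zero tiph->daddr effectively turns into "use rt_gateway as the
outer address", which is what can lead to the bogus xfrm acquires
mentioned above.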

However, what actually happens for NOARP interfaces in arp.c is:
 - unicast traffic gets NOARP entries mapped to dev->dev_addr (in the
   gre case that's the tunnel 'local' address)
 - multicast gets mapped to dev->broadcast

And if we create a gre tunnel without a local or remote address, we end
up with NOARP entries whose hwaddr is 0.0.0.0.
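
Roughly, the relevant logic in arp.c looks like this (heavily
simplified and paraphrased from memory, so don't take it literally):

	/* Heavily simplified paraphrase of arp_constructor() in
	 * net/ipv4/arp.c for a device with header_ops; not the
	 * exact upstream code. */
	static int arp_constructor(struct neighbour *neigh)
	{
		__be32 addr = *(__be32 *)neigh->primary_key;
		struct net_device *dev = neigh->dev;

		if (neigh->type == RTN_MULTICAST) {
			/* multicast: for device types without a specific
			 * mapping (ARPHRD_IPGRE included), arp_mc_map()
			 * falls back to copying dev->broadcast */
			neigh->nud_state = NUD_NOARP;
			arp_mc_map(addr, neigh->ha, dev, 1);
		} else if (dev->flags & (IFF_NOARP | IFF_LOOPBACK)) {
			/* unicast on a NOARP device: hwaddr becomes
			 * dev->dev_addr, i.e. the tunnel 'local'
			 * address for gre */
			neigh->nud_state = NUD_NOARP;
			memcpy(neigh->ha, dev->dev_addr, dev->addr_len);
		}
		/* rest omitted */
		return 0;
	}

With neither local nor remote configured, dev->dev_addr and
dev->broadcast are both all zeroes, which is how we end up with the
0.0.0.0 entries.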

Now, for unicast traffic this is mostly pointless. If the tunnel is
locally bound, the packets would never leave: they'd get a NOARP entry
for the local address. And if it is not locally bound, the packets get
rt_gateway, which is pretty confusing routing-wise (it apparently
assumes your link device is on the same IPv4 subnet as the gre device).

On the multicast side it makes a bit more sense to map multicast
groups, and this happened implicitly.

IMHO, we should fix the arp code in ipv4 and ipv6 to do proper
ARPHRD_IPGRE mappings so that _header() gets called with proper data. I
think the multicast-to-same-multicast-group mapping makes sense. But I
do not really know what to do with unicast packets sent to a gre
interface with NOARP set and no link broadcast IP address.

Actually, this was my problem: unicast packets on a gre interface with
the NOARP flag resulted in trying to send the packets out. So I could
probably just fix this by creating my gre interface *with* ARP enabled
in the first place.

But is there any sensible thing to do with unicast packets in the above
case? I think they should just be dropped. Or does someone think it
would ever make sense to take the inner unicast address and use it as
the outer address too? If so, my patch should just be reverted.

My honest thought is to keep the ip_gre header check as it is currently
and fix the arp code in ipv4 / the neighbour code in ipv6 to do the
proper NOARP mappings as needed. That way we might be able to get rid
of the huge protocol-dependent "tiph->daddr == 0" check in the xmit
path, and make sure that the header is always set properly.
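
To make that concrete, on the multicast side the mapping could be
something along these lines in arp_mc_map() (purely a hypothetical,
untested sketch, not a real patch):

	/* Hypothetical, untested sketch only: map an inner multicast
	 * destination to the same outer multicast group for
	 * ARPHRD_IPGRE, instead of falling back to dev->broadcast. */
	static int arp_mc_map(__be32 addr, u8 *haddr,
			      struct net_device *dev, int dir)
	{
		switch (dev->type) {
		case ARPHRD_IPGRE:
			/* inner multicast group == outer multicast group */
			memcpy(haddr, &addr, 4);
			return 0;
		/* existing cases (ETHER, FDDI, INFINIBAND, ...) unchanged */
		default:
			if (dir) {
				memcpy(haddr, dev->broadcast, dev->addr_len);
				return 0;
			}
		}
		return -EINVAL;
	}

The unicast case would still need a decision, as said above.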

This would also allow us to see proper NOARP entries when doing "ip
neigh show nud noarp". Currently it just shows 0.0.0.0 entries for gre
devices, without telling where the packets are actually being sent.

Any thoughts?

Cheers,
  Timo
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
