Message-ID: <20240805110629.3259ddad@kaperfahrt.nebelschwaden.de>
Date: Mon, 5 Aug 2024 11:06:29 +0200
From: Ede Wolf <listac@...elschwaden.de>
To: netdev@...r.kernel.org
Subject: ipvlan: different behaviour/path of ip4 and ip6?

Hello,

I hope this is the proper list to bring up this issue: I am seeing
packets take a different path depending on the protocol, in what I
consider to be an otherwise identical configuration for both
protocols.

I've set up ipvlan in L3S mode on a dual-homed host, where the
interface hosting the ipvlan is not the interface of the link/subnet
in question. Basically, the ipvlan is bound to eth1, while the link we
are interested in is eth0. Two containers are attached to the ipvlan.
The local ipvlan interface is named exactly that ("ipvlan").
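
For reference, the interfaces were created roughly like this (the
netns name and the host-side addresses are illustrative; only the
interface names match the ones above):

  # host-side ipvlan interface, bound to eth1, L3S mode
  ip link add ipvlan link eth1 type ipvlan mode l3s
  ip addr add 10.10.10.254/24 dev ipvlan                 # illustrative
  ip -6 addr add fde7:dead:beef:1010::fe/64 dev ipvlan   # illustrative
  ip link set ipvlan up

  # one of the two containers, shown here as a plain netns "c1"
  ip netns add c1
  ip link add ipv0 link eth1 type ipvlan mode l3s
  ip link set ipv0 netns c1
  ip netns exec c1 ip addr add 10.10.10.1/24 dev ipv0
  ip netns exec c1 ip -6 addr add fde7:dead:beef:1010::1/64 dev ipv0
  ip netns exec c1 ip link set ipv0 up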

With IPv6 I have got this to work, that is, any host connected to the
subnet on eth0 can ping the containers, and vice versa; with IPv4 it
does not.
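
(All packet captures below were taken on the host; the invocation was
along the lines of

  tcpdump -ni any -v 'icmp or icmp6'

which is what prints the interface and In/Out columns.)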

Here is an excerpt of an IPv4 ping from inside a container (10.10.10.1)
to a machine connected to the link on eth0 (192.168.17.1):

07:29:04.978583 eth1  Out IP (tos 0x0, ttl 64, id 41071, offset 0,
flags [DF], proto ICMP (1), length 84) 10.10.10.1 > 192.168.17.1: ICMP
echo request, id 32850, seq 3, length 64

It obviously takes eth1 as egress, as is to be expected, and it is
just that one stage, repeating again and again.

And here is the same ping using IPv6, with :1010::1 being the
container IP and :1a17::1 being the remote host:

07:31:29.298589 eth0  Out IP6 (flowlabel 0x50657, hlim 64, next-header
ICMPv6 (58) payload length: 64) fde7:dead:beef:1010::1 >
fde7:dead:beef:1a17::1: [icmp6 sum ok] ICMP6, echo request, id 30366,
seq 6

07:31:29.298856 eth0  In  IP6 (flowlabel 0xd783d, hlim 64, next-header
ICMPv6 (58) payload length: 64) fde7:dead:beef:1a17::1 >
fde7:dead:beef:1010::1: [icmp6 sum ok] ICMP6, echo reply, id 30366,
seq 6

07:31:29.298866 ipvlan Out IP6 (flowlabel 0xd783d, hlim 63, next-header
ICMPv6 (58) payload length: 64) fde7:dead:beef:1a17::1 >
fde7:dead:beef:1010::1: [icmp6 sum ok] ICMP6, echo reply, id 30366, seq
6

In this case, eth0 is used as the egress interface directly, which is
what makes things work.
For both protocols I would have expected to see eth1 as egress, with
the packet then being forwarded to eth0. But that may be my lack of
understanding.
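
If it is of any use: I would expect the forwarding decision to be
inspectable with something along these lines (I am guessing at the iif
argument, and I have not pasted the output):

  # which egress the host picks for the container's IPv4 traffic
  ip route get 192.168.17.1 from 10.10.10.1 iif ipvlan
  # and the IPv6 equivalent
  ip -6 route get fde7:dead:beef:1a17::1 \
      from fde7:dead:beef:1010::1 iif ipvlan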

IP forwarding is globally enabled, and no packet filter is in place.
Also, pinging the local eth0 address works equally well from within a
container and from an outside host on that link.
The containers have their default gateway set to their ipvlan device
for both protocols.
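
Concretely, that amounts to (the device name inside the container is
again illustrative):

  # on the host
  sysctl net.ipv4.ip_forward            # = 1
  sysctl net.ipv6.conf.all.forwarding   # = 1

  # inside each container
  ip route add default dev ipv0
  ip -6 route add default dev ipv0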

It is the same the other way round: request packets from an outside
host find their way into the container, but the reply gets stuck at
eth1 with IPv4, while IPv6 works.

Side note: the local ipvlan interface is reachable from everywhere; it
is only the containers that are not accessible via IPv4.

I have posted this question, with a slightly different focus and
without success, to the Arch forum, where I have also included a
simple ASCII diagram of the setup.

I just do not know whether it is appropriate to post the URL here;
putting the diagram into this mail makes no sense because of line
wrapping.

In case it could be helpful and would not be considered spam, I will
of course happily do so.

Thanks

Ede
