Message-ID: <a56eee49-49dc-1e61-19a4-6dfb6bd66f3e@nurealm.net>
Date:   Thu, 4 Jul 2019 11:11:21 -0600
From:   James Feeney <james@...ealm.net>
To:     netdev@...r.kernel.org
Subject: "local" interfaces, in forwarding state, are mutually "blind", and fail to connect

I have a question - maybe someone can point me in the right direction?

When two or more "local" interfaces exist on the "host" system, sysctl "net.ipv4.conf.<blah>.forwarding=1" has been set, and each interface has an IP address on a different subnet, then a frame that arrives at one interface, addressed to an IP address on some *other* "local" interface, is never actually delivered to that other "local" interface.  Instead, the frame is "looped back" at the interface on which it arrived, and the reply carries a *fake* "source" IP address, as if the response had actually been generated by the other "local" interface.
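As an aside, the per-interface forwarding sysctl mentioned above can be checked programmatically.  This is a Linux-only sketch that reads the procfs file behind the sysctl; the helper name `forwarding_enabled` is illustrative, not anything from the kernel:

```python
from pathlib import Path

def forwarding_enabled(iface: str) -> bool:
    """Return True if net.ipv4.conf.<iface>.forwarding is 1.

    Linux-only sketch: reads the procfs file that backs the sysctl.
    """
    path = Path("/proc/sys/net/ipv4/conf") / iface / "forwarding"
    return path.read_text().strip() == "1"

# e.g. forwarding_enabled("lo") mirrors `sysctl net.ipv4.conf.lo.forwarding`
print(forwarding_enabled("lo"))
```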

When the incoming frame is an ICMP echo request, and the outgoing response is an ICMP echo reply, this "fake source address" behavior seems to be of no consequence, except that an interface in a "down" state will still respond to ping.  Parenthetically, then, what is the definition of the "down" state?

However, when the incoming request is, for example, a domain query addressed to the IP address of the *other* "local" interface, and the domain daemon is bound only to that other interface's address, then the query is never delivered to the interface actually addressed, and the frame is never received by the daemon.  The request fails, and there is no response.
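The daemon side of this can be illustrated with plain UDP sockets.  A socket bound to one specific local address never sees datagrams addressed to a *different* local address of the same host.  This sketch stands in two loopback addresses for the two "local" interfaces (on Linux the whole 127.0.0.0/8 block is local to lo, so no extra setup is assumed); the port is ephemeral:

```python
import socket

# A "daemon" socket bound to one specific local address only.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.2", 0))
srv.settimeout(0.5)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A datagram addressed to a *different* local address of the same host
# is never delivered to this socket.
cli.sendto(b"query", ("127.0.0.1", port))
try:
    srv.recvfrom(512)
    seen_on_other_address = True
except socket.timeout:
    seen_on_other_address = False

# Only traffic addressed to the bound address reaches the socket.
cli.sendto(b"query", ("127.0.0.2", port))
data, _ = srv.recvfrom(512)

print(seen_on_other_address, data)   # expect: False b'query'
```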

At first glance this kernel behavior seems "broken".  Should the frame not *actually* be delivered to this *other* "local" interface?  Why should this "local"/"internal" packet routing fail?  Is that "on purpose"?

Now, the simplest workaround is to have the daemon also bind to every "local" interface that is meant to receive these domain requests, in this example.  But still, I am wondering whether this failure to actually deliver the frame to the *other* "local" interface, the one actually addressed, is not some kind of improper behavior.
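One common form of that workaround is to bind the daemon to the wildcard address rather than enumerating interfaces, so requests arriving via *any* local address are delivered to it.  A minimal sketch, again using two loopback addresses as stand-ins for the two "local" interfaces (a Linux assumption) and an ephemeral port:

```python
import socket

# Workaround: bind to the wildcard address instead of one specific
# interface address, so datagrams to any local address are delivered.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("0.0.0.0", 0))
srv.settimeout(0.5)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Two different local addresses standing in for two "local" interfaces.
for dst in ("127.0.0.1", "127.0.0.2"):
    cli.sendto(dst.encode(), (dst, port))

received = sorted(srv.recvfrom(512)[0].decode() for _ in range(2))
print(received)   # expect: ['127.0.0.1', '127.0.0.2']
```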

And then, a "follow-up" question: is there some otherwise simple manual reconfiguration of the kernel routing tables that would achieve the behavior I would naively expect, namely that an incoming frame arriving on one "local" interface is actually delivered to the *other* "local" interface, the one *actually* addressed, so that the domain daemon in this example could respond to the request even when it is "listening" only on that other "local" interface?  And can this be done without a bridge interface joining these separate "local" interfaces, which are intended to be on different subnets?

Thanks
James
