Message-ID: <loom.20130618T142341-689@post.gmane.org>
Date:	Tue, 18 Jun 2013 12:42:56 +0000 (UTC)
From:	Sergey <sepron@...il.com>
To:	netdev@...r.kernel.org
Subject: redirect host vs ip route cache

Hello guys,

The network scheme is the following:

Servers in network 10.0.0.0/24 -> 10.0.0.1 (core) -> 10.0.0.2 (router with 
NAT to 1.2.3.4) -> internet
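
(For completeness: NAT on 10.0.0.2 is nothing exotic, just source NAT to the
public address. Roughly the equivalent of the rule below (quoted from memory
rather than copy-pasted), with eth0 facing the LAN and eth0.635 facing
upstream:)

# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0.635 -j SNAT --to-source 1.2.3.4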

As you might have guessed, the problem is with the "router with NAT" box.
It runs Debian with kernel 3.2.0-0.bpo.4-amd64.

When I try to ping/telnet/etc. from a server on the 10.0.0.0/24 network,
I run into a problem.

Example:

Server 10.0.0.123 pings 4.2.2.4:

# ping 4.2.2.4
PING 4.2.2.4 (4.2.2.4) 56(84) bytes of data.
From 10.0.0.1: icmp_seq=3 Redirect Host(New nexthop: 10.0.0.2)
From 1.2.3.4 icmp_seq=1 Destination Host Unreachable

On 10.0.0.2 we can see this in the route cache (ip route show cache):

10.0.0.123 from 4.2.2.4 dev eth0  src 1.2.3.4
    cache  ipid 0x3837 rtt 75ms rttvar 60ms cwnd 10 iif eth0.635
4.2.2.4 from 10.0.0.123 tos lowdelay via 10.0.0.2 dev eth0.635  src 10.0.0.2 
    cache <src-direct,redirected>  ipid 0x6955 iif eth0
4.2.2.4 from 10.0.0.123 via 10.0.0.2 dev eth0.635  src 10.0.0.2 
    cache <src-direct>  ipid 0x6955 iif eth0
4.2.2.4 via 10.0.0.2 dev eth0.635  src 1.2.3.4
    cache <redirected>  ipid 0x6955
local 1.2.3.4 from 4.2.2.4 dev lo  src 1.2.3.4 
    cache <local>  ipid 0xec85 iif eth0.635

After that, I can flush the route cache (the exact command is shown after the
listing below) and everything works fine. When everything works, the route
cache looks like this:

4.2.2.4 via 1.2.3.1 dev eth0.635  src 1.2.3.4
    cache  ipid 0x6955
local 1.2.3.4 from 4.2.2.4 dev lo  src 1.2.3.4 
    cache <local>  ipid 0xecb1 iif eth0.635
10.0.0.123 from 4.2.2.4 dev eth0  src 1.2.3.4 
    cache  ipid 0x386f rtt 75ms rttvar 60ms cwnd 10 iif eth0.635
4.2.2.4 from 10.0.0.123 via 1.2.3.1 dev eth0.635  src 10.0.0.2 
    cache <src-direct>  ipid 0x6955 iif eth0
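
For reference, the flush mentioned above is nothing fancy, just the standard
command:

# ip route flush cache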



Two more observations:

1) When I tell 10.0.0.123 to use 10.0.0.2 as its default gateway, everything
works like a charm, without flushing the cache.
2) When I have a FreeBSD server as 10.0.0.2, everything works perfectly.

Bloody route cache.
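
I guess I could paper over it by telling 10.0.0.2 not to accept redirects at
all (and, if the core at 10.0.0.1 happens to run Linux, not to send them),
something like the untested sketch below, but I would rather understand why
the cache ends up in that state:

on 10.0.0.2 (the NAT box):
# sysctl -w net.ipv4.conf.all.accept_redirects=0
# sysctl -w net.ipv4.conf.default.accept_redirects=0

on 10.0.0.1, if it is Linux:
# sysctl -w net.ipv4.conf.all.send_redirects=0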

--
Regards,
Sergey



