Message-ID: <4A8AB25A.4000105@iki.fi>
Date: Tue, 18 Aug 2009 16:53:30 +0300
From: Timo Teräs <timo.teras@....fi>
To: Patrick McHardy <kaber@...sh.net>
CC: netfilter-devel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: bad nat connection tracking performance with ip_gre
Patrick McHardy wrote:
> Timo Teräs wrote:
>> LOCALLY GENERATED PACKET, hogs CPU
>> ----------------------------------
>>
>> IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344
>> TOS=0x00 PREC=0x00 TTL=8 ID=41664 DF PROTO=UDP SPT=47920
>> DPT=1234 LEN=1324 UID=1007 GID=1007
>> 1. raw:OUTPUT
>> 2. mangle:OUTPUT
>> 3. filter:OUTPUT
>> 4. mangle:POSTROUTING
>>
>
> Please include the complete output, I need to see the devices logged
> at each hook.
The devices are identical for all of the hooks grouped under the same line.
Here are the interesting lines from one packet:
Generation:
raw:OUTPUT:policy:2 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
mangle:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
(the nat hook is called for the initial packet only):
nat:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36593 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
filter:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
mangle:POSTROUTING:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324
mangle:POSTROUTING:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
Looped back by multicast routing:
raw:PREROUTING:policy:1 IN=eth1 OUT= MAC= SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324
mangle:PREROUTING:policy:1 IN=eth1 OUT= MAC= SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324
The CPU hogging happens somewhere below this point, since the more
multicast destinations I have, the more CPU it takes.
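To illustrate why I think the cost scales with the destination count: the
multicast forwarding path effectively does one clone plus one full
forward/output pass per outgoing vif. This is only a rough sketch of that
idea (forward_one_copy() is a made-up placeholder, not the real ipmr code):

#include <linux/gfp.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Placeholder for the per-vif transmit step: FORWARD hooks, conntrack
 * and, for a gre vif, the tunnel xmit path. */
void forward_one_copy(struct sk_buff *skb, struct net_device *dev);

/* Rough sketch only -- not the real ip_mr_forward()/ipmr_queue_xmit()
 * code, just the shape of the loop that makes the per-packet work grow
 * linearly with the number of multicast destinations. */
static void mcast_forward_sketch(struct sk_buff *skb,
                                 struct net_device **vifs, int nvifs)
{
        int i;

        for (i = 0; i < nvifs; i++) {
                /* one clone per destination interface... */
                struct sk_buff *copy = skb_clone(skb, GFP_ATOMIC);

                if (!copy)
                        continue;

                /* ...and each clone runs the whole forward/output path
                 * again, which is where I suspect the CPU goes */
                forward_one_copy(copy, vifs[i]);
        }
}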
Multicast forwarded (I hacked this logging into the code, but a similar
dump happens on a local sendto()):
Actually, now that I think about it, we should see the inner IP contents
here, not the still-incomplete outer header. So apparently ipgre_header()
messes up the network_header position (a sketch of this follows the dump
below).
mangle:FORWARD:policy:1 IN=eth1 OUT=gre1 SRC=0.0.0.0 DST=re.mo.te.ip LEN=0 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
filter:FORWARD:rule:2 IN=eth1 OUT=gre1 SRC=0.0.0.0 DST=re.mo.te.ip LEN=0 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
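To illustrate the suspicion about ipgre_header() above -- this is a
hand-written simplification, not the real code from net/ipv4/ip_gre.c:

#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/ip.h>

/* Simplified sketch of an ipgre_header()-style header_ops->create: it
 * pushes room for the outer IP+GRE header and copies in a template
 * iphdr.  Fields like saddr and tot_len can still be zero at this
 * point; they are only finalized later in the tunnel xmit path.  If
 * the LOG at FORWARD reads its network header from this spot, it
 * prints SRC=0.0.0.0 LEN=0 PROTO=47 -- which matches the dump above. */
static int gre_header_sketch(struct sk_buff *skb, const struct iphdr *tmpl,
                             int hlen)
{
        struct iphdr *iph = (struct iphdr *)skb_push(skb, hlen);

        memcpy(iph, tmpl, sizeof(*iph)); /* template: protocol=IPPROTO_GRE, daddr set */
        return hlen;                     /* saddr/tot_len/check not filled in yet     */
}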
ip_gre xmit then sends this out:
raw:OUTPUT:rule:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
raw:OUTPUT:policy:2 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
mangle:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
(the nat hook is called for the initial packet only):
nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
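For reference, the reason the encapsulated packet shows up in the OUTPUT
hooks again is that the tunnel xmit hands the new GRE packet back to the
local IP output path, so every multicast copy goes through the hooks (and
conntrack) a second time as an outer PROTO=47 packet. Roughly -- again a
simplification with a made-up gre_encap_sketch() helper, not the actual
ipgre_tunnel_xmit()/IPTUNNEL_XMIT() code:

#include <linux/errno.h>
#include <linux/skbuff.h>
#include <net/ip.h>

/* Placeholder that builds the outer IP+GRE header around the original
 * packet; stands in for the real encapsulation code. */
struct sk_buff *gre_encap_sketch(struct sk_buff *skb);

/* Rough sketch of the tail of a GRE tunnel xmit: the freshly built
 * outer packet is handed back to the local output path, so it passes
 * raw/mangle/nat/filter OUTPUT (and conntrack) once more as PROTO=47,
 * which is what the dump above shows. */
static int gre_xmit_sketch(struct sk_buff *skb)
{
        struct sk_buff *outer = gre_encap_sketch(skb);

        if (!outer)
                return -ENOMEM;

        /* re-enters the IP output path, NF_INET_LOCAL_OUT included */
        return ip_local_out(outer);
}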
- Timo