Date:	Fri, 3 Jun 2011 12:36:32 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	netdev@...r.kernel.org
Cc:	bugzilla-daemon@...zilla.kernel.org,
	bugme-daemon@...zilla.kernel.org, igor@...g.net,
	Stephen Hemminger <shemminger@...ux-foundation.org>
Subject: Re: [Bugme-new] [Bug 36602] New: Bridge fails to work normally
 without net.ipv4.ip_forward=1


(switched to email.  Please respond via emailed reply-to-all, not via the
bugzilla web interface).

On Fri, 3 Jun 2011 19:21:20 GMT
bugzilla-daemon@...zilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=36602
> 
>            Summary: Bridge fails to work normally without
>                     net.ipv4.ip_forward=1
>            Product: Networking
>            Version: 2.5
>     Kernel Version: 2.6.38.7
>           Platform: All
>         OS/Version: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: Other
>         AssignedTo: acme@...stprotocols.net
>         ReportedBy: igor@...g.net
>         Regression: No
> 
> 
> Yes, this seems strange, but it seems to be true.
> 
> My network scheme is quite simple:
> 
> (host1) <--- 10gbe ---> (bridge host) <--- 10gbe ---> (host2)
> 
> host1 & host2 are actually VMware ESXi hypervisors, but I think that's
> irrelevant in this case.
> 
> Network adapters are Intel 82599 10GbE cards on all hosts.
> 
> On the bridge, I created a VLAN on each interface and then bridged them:
> # vconfig add eth0 102
> # vconfig add eth1 102
> # brctl addbr br0
> # brctl addif br0 eth0.102
> # brctl addif br0 eth1.102
> # ip link set br0 mtu 9000 up
> ...etc...
> 
> At this point the bridge seems to be working: I can ping between host1 and
> host2, even with jumbo frames and no fragmentation.
> 
> BUT when I try to use iperf & friends to measure raw TCP throughput between
> hosts 1 and 2, I get something weird like 7-10 MEGABITS per second, or even
> an iperf hang until Ctrl+C.
> 
> If I attach an IP address to the bridge and measure between the hosts and
> the bridge, it works flawlessly, delivering 9.8 Gbit/s in both directions.
> 
> While trying to find a solution, when I ran out of options, I set
> net.ipv4.ip_forward to 1, and, SURPRISE, the bridge started to work like a
> charm, at almost 10-gigabit speed.
> 
> What makes it stranger is that in my kernel I've turned off all the routing
> code, iptables and other stuff, as this host serves primarily as an iSCSI
> target.
> 
> I have little knowledge of the kernel's deep internals, but I always thought
> that bridging and routing operate at different levels and couldn't affect
> each other (ebtables is an exception, but I don't have it :) ).
> 
> Maybe I'm interpreting the results wrong, but I've ruled out everything else.
> 
> Currently I can't use this setup as a test ground; I'll try to replicate the
> scheme in a virtual environment to see whether other kernels are affected as
> well.
> 
> Glad to hear any ideas on this.
> 
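One hedged place to look: a common cause of "bridge only works when some IP sysctl is flipped" symptoms is the br_netfilter hooks, which (when that code is present) divert bridged frames through the IP-layer iptables chains. A speculative diagnostic sketch, assuming the bridge-nf sysctls exist on this kernel:

```shell
# Speculative diagnostics (not from the report): check whether bridged
# frames are being passed to the IP-layer netfilter hooks. These sysctls
# only exist when the bridge/br_netfilter code is loaded.
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables

# If set to 1, bridged traffic traverses iptables; try disabling and
# re-running the iperf measurement:
sysctl -w net.bridge.bridge-nf-call-iptables=0
```

If throughput recovers with these set to 0, the slowdown is in the netfilter path rather than in the bridge itself.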

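Another hedged check, specific to the 82599: the kernel disables large-receive-offload (LRO) on devices when IP forwarding is enabled, because LRO-aggregated frames are not safe to forward or bridge. If ixgbe's LRO is active here, that could explain why setting ip_forward=1 changes behaviour. A sketch of how to test that hypothesis directly (interface names taken from the report):

```shell
# Assumption: the ixgbe driver has LRO enabled on the bridged ports.
# Inspect the offload settings on each slave interface:
ethtool -k eth0 | grep large-receive-offload
ethtool -k eth1 | grep large-receive-offload

# Disable LRO on both ports without touching ip_forward, then re-test:
ethtool -K eth0 lro off
ethtool -K eth1 lro off
```

If the bridge reaches line rate with LRO off and ip_forward=0, the problem is the offload interaction, not the bridging code.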
