Date:	Fri, 21 May 2010 23:24:39 -0700
From:	"George B." <georgeb@...il.com>
To:	netdev <netdev@...r.kernel.org>
Subject: VLANs, bonding redux: vlan state does not follow ethernet

Using 2.6.34, I am trying to remove bottlenecks.  Instead of bonding
two ethernet interfaces and applying vlans to the bond, I am applying
the vlans to the ethernet interfaces and bonding the vlans, creating a
separate bond interface for each vlan.
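
For reference, the setup is roughly the following (a sketch only; the
real config covers several vlan IDs, and the vconfig/ifenslave commands
could equally be their iproute2/sysfs equivalents):

    # vlan 99 on each physical interface
    vconfig add eth0 99
    vconfig add eth1 99
    ip link set eth0.99 up
    ip link set eth1.99 up

    # a round-robin bond made out of the two vlan interfaces
    modprobe bonding mode=balance-rr
    ip link set bond0 up
    ifenslave bond0 eth0.99 eth1.99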

The trouble now is that the bond interface does not see when the
ethernet interface goes down.  The vlan reports to the bonding driver
that it is up even when the ethernet interface it is attached to is
down.  This results in packet loss through the bond interface as the
bond driver keeps attempting to use that vlan.
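
One way to see the disagreement is to compare what the kernel exposes
for the physical interface and for its vlan device (paths assume sysfs
is mounted at /sys):

    cat /sys/class/net/eth1/carrier      # physical link, 0 = no carrier
    cat /sys/class/net/eth1/operstate
    cat /sys/class/net/eth1.99/carrier   # what the vlan device claims
    cat /sys/class/net/eth1.99/operstate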

eth1 shows having no link:

root@...dbox:/proc/net# ethtool eth1
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  Not reported
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: No
        Speed: Unknown!
        Duplex: Unknown! (255)
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: pumbag
        Wake-on: g
        Current message level: 0x00000001 (1)
        Link detected: no

bonding driver says eth1.99 reports MII status up:

root@...dbox:/proc/net# cat bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0.99
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:26:9e:1c:d3:3e

Slave Interface: eth1.99
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:26:9e:1c:d3:3f

Is there some parameter I can give that tells the vlan driver to
follow the state of the interface it is attached to?  Having a vlan
that reports being up all the time, even when its underlying interface
is down, is less than useful.  It would seem intuitive that a vlan's
state would follow that of the interface it is attached to.
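
The bond0 output above also shows "MII Polling Interval (ms): 0", i.e.
the bonding driver is not actively polling link state at all.  In case
that turns out to matter, a sketch of turning polling on (the 100 ms
value is just an example):

    # via a module option when loading bonding
    modprobe bonding mode=balance-rr miimon=100

    # or at runtime through sysfs
    echo 100 > /sys/class/net/bond0/bonding/miimon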

root@...dbox:/proc/net# cat vlan/eth0.99
eth0.99  VID: 99         REORDER_HDR: 1  dev->priv_flags: 21
         total frames received           32
          total bytes received         4735
      Broadcast/Multicast Rcvd            0

      total frames transmitted           50
       total bytes transmitted         3852
            total headroom inc            0
           total encap on xmit            0
Device: eth0
INGRESS priority mappings: 0:0  1:0  2:0  3:0  4:0  5:0  6:0 7:0
 EGRESS priority mappings:
root@...dbox:/proc/net# cat vlan/eth1.99
eth1.99  VID: 99         REORDER_HDR: 1  dev->priv_flags: 21
         total frames received            0
          total bytes received            0
      Broadcast/Multicast Rcvd            0

      total frames transmitted            0
       total bytes transmitted            0
            total headroom inc            0
           total encap on xmit            0
Device: eth1
INGRESS priority mappings: 0:0  1:0  2:0  3:0  4:0  5:0  6:0 7:0
 EGRESS priority mappings:

root@...dbox:/proc/net# ping 10.1.99.1
PING 10.1.99.1 (10.1.99.1) 56(84) bytes of data.
64 bytes from 10.1.99.1: icmp_seq=2 ttl=255 time=0.299 ms
64 bytes from 10.1.99.1: icmp_seq=4 ttl=255 time=0.311 ms
64 bytes from 10.1.99.1: icmp_seq=6 ttl=255 time=0.325 ms
64 bytes from 10.1.99.1: icmp_seq=8 ttl=255 time=0.291 ms
64 bytes from 10.1.99.1: icmp_seq=10 ttl=255 time=0.308 ms

George
