Message-ID: <12128.1326916050@death>
Date:	Wed, 18 Jan 2012 11:47:30 -0800
From:	Jay Vosburgh <fubar@...ibm.com>
To:	Simon Chen <simonchennj@...il.com>
cc:	netdev@...r.kernel.org
Subject: Re: Wrong mac in arp response in bonded interfaces

Simon Chen <simonchennj@...il.com> wrote:

>Hi all,
>
>Something really weird with interface bonding...
>
>I have eth0 and eth1, with MAC addresses xx:44 and xx:45. The bonded
>interface chose to use xx:45 as its MAC.
>
>I configured an IP on the bonded interface and tried to ping the
>default GW. The server's ARP request for the .1 address is answered
>by the GW. The server then sends ICMP to the GW. The problem is that
>the GW is not responding to the ping.

	How much real time elapses between setting up the bond and this
ping test?  What are the slaves set up as prior to the bond being
established?  In particular, is one of them (the :44) assigned the IP
address that the bond ends up using?
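
	For reference, a quick way to check is to read sysfs.  Below is a
minimal Python sketch (the bond name "bond0" is an assumption on my
part; adjust it to your setup) that prints the MAC the bond is
currently presenting and the MAC of each enslaved interface:

#!/usr/bin/env python3
# Print the MAC currently used by a bond and by each of its slaves.
# Assumes Linux with sysfs mounted and a bond named "bond0" (placeholder).
from pathlib import Path

BOND = "bond0"  # placeholder bond name; substitute your own
bond_dir = Path("/sys/class/net") / BOND

# /sys/class/net/<bond>/address is the MAC the bond presents.
print(BOND, (bond_dir / "address").read_text().strip())

# /sys/class/net/<bond>/bonding/slaves lists the enslaved interfaces.
for slave in (bond_dir / "bonding" / "slaves").read_text().split():
    mac = (Path("/sys/class/net") / slave / "address").read_text().strip()
    print(slave, mac)

	In most modes the slaves are flipped to the bond's MAC while they
are enslaved (balance-alb being the notable exception), so if :44
still shows up here, that is a useful clue.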

>I then logged onto the GW (a switch) - apparently, the ARP table on
>the GW shows that my server's IP is associated with the xx:44 MAC
>address. So the GW is actually responding to the ICMP, just to the
>wrong destination MAC.
>
>Any idea how the xx:44 MAC somehow polluted the ARP table on my GW?
>How can I make sure my server always sends packets out with the
>xx:45 MAC via the bonded interface?

	My first suspicion is that a stale ARP entry on the switch is
hanging around for the :44 MAC address from before the bond was
established on the host.  If you clear the switch's ARP table, does the
problem correct itself or happen again?
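
	If you want to nudge the switch from the host side instead of
clearing its table, sending a gratuitous ARP out of the bond should
make it relearn the mapping ("arping -U -I bond0 <ip>" from iputils
does roughly this).  For illustration only, a rough Python sketch; the
interface name and address are placeholders, and it needs root:

#!/usr/bin/env python3
# Send one gratuitous ARP out of the bond so the gateway relearns our MAC.
# Assumes Linux, root privileges, an interface named "bond0" and the IP
# below; both are placeholders for whatever is configured on the bond.
import socket, struct

IFACE = "bond0"        # placeholder interface name
SRC_IP = "192.0.2.10"  # placeholder: the IP configured on the bond

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0806))
s.bind((IFACE, 0))
src_mac = s.getsockname()[4]   # IFACE's hardware address
bcast = b"\xff" * 6
ip = socket.inet_aton(SRC_IP)

# Gratuitous ARP: a broadcast ARP request for our own IP, sender == target.
arp = struct.pack("!HHBBH6s4s6s4s",
                  1, 0x0800, 6, 4, 1,   # Ethernet, IPv4, hlen, plen, request
                  src_mac, ip, bcast, ip)
s.send(bcast + src_mac + struct.pack("!H", 0x0806) + arp)
s.close()

	The bonding driver sends similar notifications itself on failover;
this is just a manual poke while you are debugging.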

	The other possibility that comes to mind is that you're using
balance-alb mode, in which case I suspect what you're seeing is normal
behavior.  The alb mode "assigns" peers to particular slaves of the bond
by sending them tailored ARP messages bearing the MAC of one of the
slaves, and each slave participates on the network under its own MAC
address (I'm simplifying a bit here, but that's basically how it works).
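
	You can confirm which case applies by checking the mode; a tiny
sketch, again assuming the bond is called "bond0":

#!/usr/bin/env python3
# Report the bonding mode, to tell a stale-ARP problem apart from
# normal balance-alb behaviour.  "bond0" is a placeholder name.
from pathlib import Path

BOND = "bond0"  # placeholder; substitute your bond's name
mode_file = Path("/sys/class/net") / BOND / "bonding" / "mode"
mode = mode_file.read_text().split()[0]   # e.g. "balance-alb 6" -> "balance-alb"
print("mode:", mode)
if mode == "balance-alb":
    print("seeing per-slave MACs (your :44 and :45) on the wire is expected")

	/proc/net/bonding/bond0 shows the same information, plus each
slave's permanent hardware address.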

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@...ibm.com
