Open Source and information security mailing list archives
Message-ID: <ec7a86ec-56e0-7846-ed02-337850fc8478@wkks.org>
Date: Thu, 23 May 2019 15:06:11 -0500
From: Bill Carlson <billcarlson@...s.org>
To: Jay Vosburgh <jay.vosburgh@...onical.com>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: bonding-devel mail list?

On 5/23/19 11:46 AM, Jay Vosburgh wrote:
> As far as I'm aware, nesting bonds has no practical benefit; do
> you have a use case for doing so?

The use case is very specific, but needed in my situation until some
switches are stabilized.

Switches A1..Ax provide LACP, 40G. These are unstable: they lose link on
one or more interfaces, or drop completely. A single bond to the A
switches was acceptable at first, including when one interface was down
for quite a while. Then all the A switches dropped.

Switches B1..Bx provide no LACP, 10G. These are connected anyway,
already in place for backup.

All are on the same layer 2 network, meaning any MAC is visible on any
switch.

The goal is to use the A switches primarily, and drop back to B _IF_ A
is completely down. As long as one interface is active on A, that will
be used. I assume LACP and active-passive can't be used in the same
bond; interested to hear if I'm wrong.

The setup I achieved:

bond0  -> switches B, multiple interfaces, active-passive
bond1  -> switches A, multiple interfaces, LACP
bond10 -> slaves bond0 and bond1, active-passive

Various VLANs are using bond10.

Options to bonding:

bond0:  mode=1 fail_over_mac=none miimon=100
bond1:  mode=4 lacp_rate=1 miimon=100
bond10: mode=1 fail_over_mac=1 primary=bond1 updelay=10000 miimon=100

(I should probably change to ARP monitoring, I know.) updelay is in
place because LACP takes a long time to link. Making sure the MACs
switched was the key.

Network performance tests via iperf3 look good, including when dropping
bond1. Unfortunately, the target test system was on bond0, as its A
switches were down. The one critical test I haven't been able to
perform is physically dropping the A links; I can't reach that far.
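For anyone wanting to reproduce this, here is a minimal iproute2 sketch
of the nested-bond layout described above. The NIC names eth0..eth3 are
placeholders (the post doesn't name the physical interfaces); the mode
numbers map to their iproute2 names: mode=1 is active-backup, mode=4 is
802.3ad, lacp_rate=1 is fast, and fail_over_mac=1 is active.

```shell
# Sketch only; assumes eth0/eth1 go to the B switches and eth2/eth3 to
# the A switches. Requires root and the bonding module.
ip link add bond0 type bond mode active-backup miimon 100 fail_over_mac none
ip link add bond1 type bond mode 802.3ad lacp_rate fast miimon 100
ip link add bond10 type bond mode active-backup miimon 100 \
        fail_over_mac active primary bond1 updelay 10000

# Enslave the physical interfaces (they must be down to change master).
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set eth2 down; ip link set eth2 master bond1
ip link set eth3 down; ip link set eth3 master bond1

# Nest the two bonds under bond10; VLANs then stack on top of bond10.
ip link set bond0 down; ip link set bond0 master bond10
ip link set bond1 down; ip link set bond1 master bond10
ip link set bond10 up
```

On systems managed by ifupdown or netplan the same options map onto the
bond-mode / bond-primary style keys rather than ip commands.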
:)

-- 
Bill Carlson
Anything is possible, given Time and Money.