Message-ID: <20447.1712795309@famine>
Date: Wed, 10 Apr 2024 17:28:29 -0700
From: Jay Vosburgh <jay.vosburgh@...onical.com>
To: Thomas Bogendoerfer <tbogendoerfer@...e.de>
cc: Andy Gospodarek <andy@...yhouse.net>,
    "David S. Miller" <davem@...emloft.net>,
    Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
    Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
    linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] bonding: 802.3ad: Avoid packet loss when switching aggregator

Thomas Bogendoerfer <tbogendoerfer@...e.de> wrote:

>On Mon, 08 Apr 2024 09:06:11 -0700
>Jay Vosburgh <jay.vosburgh@...onical.com> wrote:
>
>> Thomas Bogendoerfer <tbogendoerfer@...e.de> wrote:
>> 
>> >If selection logic decides to switch to a new aggregator it disables
>> >all ports of the old aggregator, but doesn't enable ports on
>> >the new aggregator. These ports will eventually be enabled when
>> >the next LACPDU is received, which might take some time and without an
>> >active port transmitted frames are dropped. Avoid this by enabling
>> >already collected ports of the new aggregator immediately.
>> >
>> >Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@...e.de>
>> >---
>> > drivers/net/bonding/bond_3ad.c | 7 +++++++
>> > 1 file changed, 7 insertions(+)
>> >
>> >diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
>> >index c6807e473ab7..529e2a7c51e2 100644
>> >--- a/drivers/net/bonding/bond_3ad.c
>> >+++ b/drivers/net/bonding/bond_3ad.c
>> >@@ -1876,6 +1876,13 @@ static void ad_agg_selection_logic(struct aggregator *agg,
>> > 				__disable_port(port);
>> > 			}
>> > 		}
>> >+
>> >+		/* enable ports on new active aggregator */
>> >+		for (port = best->lag_ports; port;
>> >+			port = port->next_port_in_aggregator) {
>> >+			__enable_port(port);
>> >+		}
>> >+  
>> 
>> 	I think this will do the wrong thing if the port in question is
>> not in a valid state to send or receive (i.e., it is not one of
>> COLLECTING_DISTRIBUTING, COLLECTING, or DISTRIBUTING).
>> 
>> 
>> 	As it happens, this situation, except for the case of individual
>> ports, is handled just below this code:
>> 
>> 	/* if the selected aggregator is of join individuals
>> 	 * (partner_system is NULL), enable their ports
>> 	 */
>> 	active = __get_active_agg(origin);
>> 
>> 	if (active) {
>> 		if (!__agg_has_partner(active)) {
>> 			for (port = active->lag_ports; port;
>> 			     port = port->next_port_in_aggregator) {
>> 				__enable_port(port);
>> 			}
>> 			*update_slave_arr = true;
>> 		}
>> 	}
>> 
>> 	rcu_read_unlock();
>> 
>> 	FWIW, looking at it, I'm not sure that "__agg_has_partner" is
>> the proper test for individual-ness, but I'd have to do a bit of poking
>> to confirm that.  In any event, that's not what you want to change right
>> now.
>> 
>> 	Instead of adding another block that does more or less the same
>> thing, I'd suggest updating this logic to include tests for C_D, C, or D
>> states, and enabling the ports if that is the case.  Probably something
>> like (I have not tested or compiled this at all):
>> 
>> 	if (active) {
>> 		if (!__agg_has_partner(active)) {
>> 			[ ... the current !__agg_has_partner() stuff ]
>> 		} else {
>
>Moving it here will run this part on every call of ad_agg_selection_logic(),
>but it would only be relevant if there is a switch to a different aggregator.

	True; that could be tested for, though, as the original
aggregator is stored in the variable "origin".  This is probably moot in
light of my comments below.
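
	As an untested sketch of that, assuming "active" and "origin" are
both still in scope at this point in ad_agg_selection_logic(), the guard
might look like:

	/* only walk the new aggregator's ports if the active
	 * aggregator actually changed
	 */
	if (active && active != origin) {
		for (port = active->lag_ports; port;
		     port = port->next_port_in_aggregator) {
			/* per-port state checks as discussed below */
		}
	}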

>> 			for (port = active->lag_ports; port;
>> 			     port = port->next_port_in_aggregator) {
>> 				switch (port->sm_mux_state) {
>> 				case AD_MUX_DISTRIBUTING:
>> 				case AD_MUX_COLLECTING_DISTRIBUTING:
>> 					ad_enable_collecting_distributing(port,
>> 							update_slave_arr);
>> 					port->ntt = true;
>> 					break;
>> 				case AD_MUX_COLLECTING:
>> 					ad_enable_collecting(port);
>> 					ad_disable_distributing(port, update_slave_arr);
>> 					port->ntt = true;
>> 					break;
>> 				default:
>> 					break;
>> 				}
>> 			}
>> 		}
>> 	}
>
>I've tried this in my test environment and it doesn't fix the issue
>I'm seeing, because the port of the new aggregator is still in AD_MUX_WAITING...
>
>The issue is that after bringing the bond up, the bond link is up but no
>slave can transmit. This happens exactly when the aggregator is changed,
>due to the timing of the received LACPDU. So if enabling the port in
>AD_MUX_WAITING is wrong, what are other ways to fix this problem?

	Ok, I've looked through the code a bit more and I understand at
least some of what's going on.  I recall testing this some years ago to
ensure that failover between aggregators functions correctly, although I
don't recall looking into loss rates during the failover.

	First, I'm not sure why your port is in WAITING state, unless
it's simply that your test is happening very quickly after the port is
added to the bond.  The standard (IEEE 802.1AX-2014 6.4.15) requires
ports to remain in WAITING state for 2 seconds when transitioning from
DETACHED to ATTACHED state (to limit thrashing when multiple ports are
added in a short span of time).
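
	Roughly paraphrased (this is not the verbatim kernel code), the
mux machine gates the WAITING -> ATTACHED transition on that wait having
expired:

	case AD_MUX_WAITING:
		/* AD_PORT_READY is only set once the ~2 second wait_while
		 * period has run out, so a freshly added port sits here
		 * until then
		 */
		if (port->sm_vars & AD_PORT_READY)
			port->sm_mux_state = AD_MUX_ATTACHED;
		break;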

	You mention the issue happens when the aggregator changes; do
you have a detailed sequence of events that describes how the issue is
induced?

	I also see a potential issue in the handling of READY_N and
READY, although I'd need your test case to determine if it's an actual
problem or just something that looks odd but behaves correctly.

	As for the rest, if your issue revolves around failover between
aggregators in an established bond, then I'd expect the ports to remain
in ATTACHED state when their aggregator is not the active aggregator, as
the state machine logic in ad_mux_machine() won't advance beyond
ATTACHED state in this case, e.g.,

static void ad_mux_machine(struct port *port, bool *update_slave_arr)
{
[...]
		case AD_MUX_ATTACHED:
[...]
				if (port->aggregator->is_active) {
					int state = AD_MUX_COLLECTING_DISTRIBUTING;

	When an aggregator's ports move to COLLECTING, DISTRIBUTING or
COLLECTING_DISTRIBUTING state, the link partner will logically expect
that it may send and receive traffic across the ports in the aggregator.
The standard permits an arbitrary number of aggregators to be active
simultaneously, but bonding isn't able to operate more than one
aggregator at a time within the context of a single bonding interface.

	If this is the crux of the problem, we could potentially change
the state machine logic to run the complete state machine on all ports.
This would need to ensure that the "inactive" flag logic works correctly
if ports of an inactive aggregator are in C, D or C_D state.  This
should operate similarly to how the inactive bond interfaces are treated
in active-backup mode.  The LACPDU packets should already be passed
through by bond_handle_frame(), so the question would really be whether
ordinary traffic is handled correctly on the inactive aggregators.
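
	Purely as an illustrative sketch of that (names taken from the
existing bonding helpers, placement and details not verified), the
inactive handling might end up looking something like:

	/* hypothetical: after selection, keep ordinary traffic off the
	 * ports of the non-active aggregators while LACPDUs still reach
	 * bond_handle_frame()
	 */
	bond_for_each_slave(bond, slave, iter) {
		if (SLAVE_AD_INFO(slave)->port.aggregator == active)
			bond_set_slave_active_flags(slave, BOND_SLAVE_NOTIFY_LATER);
		else
			bond_set_slave_inactive_flags(slave, BOND_SLAVE_NOTIFY_LATER);
	}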

	I think the main code change would largely be removing most or
all of the tests (like the sample above) against aggregator->is_active
in ad_mux_machine(), ad_enable_collecting(), and
ad_enable_collecting_distributing().  I haven't tested this at all; this
is just my speculation from looking at the code.
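
	To give a sense of the shape of that change (again, speculative
and untested), the ATTACHED case quoted earlier would lose its is_active
gate, roughly:

			/* was gated on port->aggregator->is_active; now
			 * advance toward COLLECTING_DISTRIBUTING regardless
			 */
			int state = AD_MUX_COLLECTING_DISTRIBUTING;
			[...]
			port->sm_mux_state = state;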

	-J

---
	-Jay Vosburgh, jay.vosburgh@...onical.com
