Date: Fri, 28 Jun 2024 10:04:37 +0300
From: Nikolay Aleksandrov <razor@...ckwall.org>
To: Hangbin Liu <liuhangbin@...il.com>,
 Jay Vosburgh <jay.vosburgh@...onical.com>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
 Andy Gospodarek <andy@...yhouse.net>, "David S. Miller"
 <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 Paolo Abeni <pabeni@...hat.com>, Ido Schimmel <idosch@...dia.com>,
 Jiri Pirko <jiri@...nulli.us>, Amit Cohen <amcohen@...dia.com>
Subject: Re: [PATCHv3 net-next] bonding: 3ad: send ifinfo notify when mux
 state changed

On 28/06/2024 06:10, Hangbin Liu wrote:
> On Thu, Jun 27, 2024 at 07:24:10AM -0700, Jay Vosburgh wrote:
>> Hangbin Liu <liuhangbin@...il.com> wrote:
>>> Ah.. Yes, that's a sad fact :(
>>
>> 	There are basically two paths that will change the LACP state
>> that's passed up via netlink (the aggregator ID, and actor and partner
>> oper port states): bond_3ad_state_machine_handler(), or incoming
>> LACPDUs, which call into ad_rx_machine().  Administrative changes to the
> 
> Ah, thanks, I didn't notice this. I will also enable lacp notify
> in ad_rx_machine().
> 
>> bond will do it too, like adding or removing interfaces, but those
>> originate in user space and aren't happening asynchronously.
>>
>> 	If you want (almost) absolute reliability in communicating every
>> state change for the state machine and LACPDU processing, I think you'd
>> have to (a) create an object with the changed state, (b) queue it
>> somewhere, then (c) call a workqueue event to process that queue out of
>> line.
> 
> Hmm... This looks too complex. If we store all the states, frequent link
> flapping may consume a lot of memory. If we put a limit on the queue, we may
> still lose some state changes.
> 
> I'm not sure which way is better.
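Just to make the (a)/(b)/(c) suggestion concrete, here is a minimal, hypothetical
sketch of that pattern (none of these names exist in the bonding driver; the
bound on the queue is meant to address the memory concern above by dropping and
counting events once the limit is hit):

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define AD_EVENT_QUEUE_MAX	64	/* arbitrary bound on queued events */

struct bond_ad_event {
	struct list_head list;
	int port_ifindex;
	u8 actor_oper_port_state;
	u8 partner_oper_port_state;
};

static LIST_HEAD(ad_event_queue);
static DEFINE_SPINLOCK(ad_event_lock);
static unsigned int ad_event_qlen, ad_event_dropped;

static void bond_ad_event_workfn(struct work_struct *work);
static DECLARE_WORK(ad_event_work, bond_ad_event_workfn);

/* (a)+(b): called from the state machine / ad_rx_machine() context */
static void bond_ad_queue_event(int ifindex, u8 actor, u8 partner)
{
	struct bond_ad_event *ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
	unsigned long flags;

	if (!ev)
		return;
	ev->port_ifindex = ifindex;
	ev->actor_oper_port_state = actor;
	ev->partner_oper_port_state = partner;

	spin_lock_irqsave(&ad_event_lock, flags);
	if (ad_event_qlen >= AD_EVENT_QUEUE_MAX) {
		ad_event_dropped++;	/* bounded: drop instead of growing */
		kfree(ev);
	} else {
		list_add_tail(&ev->list, &ad_event_queue);
		ad_event_qlen++;
	}
	spin_unlock_irqrestore(&ad_event_lock, flags);
	schedule_work(&ad_event_work);
}

/* (c): drain the queue and send the notifications out of line */
static void bond_ad_event_workfn(struct work_struct *work)
{
	struct bond_ad_event *ev;
	unsigned long flags;

	for (;;) {
		spin_lock_irqsave(&ad_event_lock, flags);
		ev = list_first_entry_or_null(&ad_event_queue,
					      struct bond_ad_event, list);
		if (ev) {
			list_del(&ev->list);
			ad_event_qlen--;
		}
		spin_unlock_irqrestore(&ad_event_lock, flags);
		if (!ev)
			break;
		/* send the netlink notification for ev here */
		kfree(ev);
	}
}

An alternative to dropping would be to coalesce per port (keep only the latest
state for each port), which also bounds memory but never loses the final state.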
> 
>>
>>>> It all depends on what are the requirements.
>>>>
>>>> An uglier but lockless alternative would be to poll the slave's sysfs oper
>>>> state; that doesn't require any locks and would be up to date.
>>>
>>> Hmm, that's a workaround, but the admin needs to poll the state frequently,
>>> as they don't know when the state will change.
>>>
>>> Hi Jay, are you OK to add this sysfs in bonding?
>>
>> 	I think what Nik is proposing is for your userspace to poll the
>> /sys/class/net/${DEV}/operstate.
> 

Actually I was talking about:
 /sys/class/net/<bond port>/bonding_slave/ad_actor_oper_port_state
 /sys/class/net/<bond port>/bonding_slave/ad_partner_oper_port_state
etc

Wouldn't these work for you?
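For completeness, those files print the 802.3ad port state as a plain number,
so a poller only has to read them and test the relevant bit. A minimal
userspace sketch, assuming the standard LACP port state bit layout where 0x20
is the "distributing" bit ("eth0" is just an example port name):

#include <stdio.h>

#define LACP_STATE_DISTRIBUTING	0x20	/* assumed standard bit layout */

static int ad_actor_oper_port_state(const char *port)
{
	char path[256];
	FILE *f;
	int state = -1;

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/bonding_slave/ad_actor_oper_port_state",
		 port);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &state) != 1)
		state = -1;
	fclose(f);
	return state;
}

int main(void)
{
	int state = ad_actor_oper_port_state("eth0");

	if (state < 0)
		return 1;
	printf("actor state 0x%02x, distributing: %s\n", state,
	       (state & LACP_STATE_DISTRIBUTING) ? "yes" : "no");
	return 0;
}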

> OK. There are 2 scenarios I have in mind.
> 
> 1) The local user wants to get the local/partner state and make sure not to
> send packets before the ports are in DISTRIBUTING state, to avoid packet
> drops, or vice versa. Only checking the link operstate or up/down status is
> not enough.
> 
> 2) The admin wants to get the switch/partner status via the LACP state, in
> case the switch has crashed.
> 
> Do you have any suggestion for the implementation?
> 
> Thanks
> Hangbin
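
For the two scenarios quoted above, a push-based alternative to polling sysfs
would be to listen for the rtnetlink link notifications this patch is about.
A rough userspace sketch that only shows RTM_NEWLINK messages arriving; real
code would still have to walk the nested link attributes of each message to
pull out the LACP state:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
	struct sockaddr_nl sa = {
		.nl_family = AF_NETLINK,
		.nl_groups = RTMGRP_LINK,	/* RTM_NEWLINK / RTM_DELLINK */
	};
	char buf[8192];
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;

	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nh;

		if (len <= 0)
			break;
		for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
		     nh = NLMSG_NEXT(nh, len)) {
			if (nh->nlmsg_type == RTM_NEWLINK) {
				struct ifinfomsg *ifi = NLMSG_DATA(nh);

				printf("RTM_NEWLINK for ifindex %d\n",
				       ifi->ifi_index);
			}
		}
	}
	close(fd);
	return 0;
}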

