Message-ID: <87mt0vutzv.fsf@nvidia.com>
Date: Mon, 19 Jun 2023 22:13:07 +0300
From: Vlad Buslov <vladbu@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Saeed Mahameed <saeed@...nel.org>, "David S. Miller"
	<davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>, Eric Dumazet
	<edumazet@...gle.com>, Saeed Mahameed <saeedm@...dia.com>,
	<netdev@...r.kernel.org>, Tariq Toukan <tariqt@...dia.com>, Gal Pressman
	<gal@...dia.com>
Subject: Re: [net-next 07/15] net/mlx5: Bridge, expose FDB state via debugfs


On Mon 19 Jun 2023 at 12:05, Jakub Kicinski <kuba@...nel.org> wrote:
> On Mon, 19 Jun 2023 21:34:02 +0300 Vlad Buslov wrote:
>> > Looks like my pw-bot shenanigans backfired / crashed, patches didn't
>> > get marked as Changes Requested and Dave applied the series :S
>> >
>> > I understand the motivation but the information is easy enough to
>> > understand to potentially tempt a user to start depending on it for
>> > production needs. Then another vendor may get asked to implement
>> > similar but not exactly the same set of stats etc. etc.  
>> 
>> That could happen (although consider that bridge offload functionality
>> significantly predates the mlx5 implementation, and apparently no one
>> has really needed it until now), but such an API would supplement, not
>> replace, the debugfs, since we would like to have per-eswitch FDB state
>> exposed together with our internal flags and everything else, as
>> explained in my previous email.
>
> Because crossing between eswitches incurs additional cost?

It is not about performance. I install multiple steering rules (one per
eswitch), and I would like to understand which one is processing the
packets when something goes wrong (main or peer). When a user or field
engineer complains that some FDB entry is (or is not) aged out as
expected, I would like them to dump the file several times while running
traffic, to see how lastused and the counters change over that period.
This is just basic debugging because, again, ConnectX doesn't implement
802.1D in hardware, so all FDB management is done purely in software and
we need a way to expose its state.
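The dump-several-times workflow described above can be sketched as a small
shell helper. The debugfs path in the usage comment is purely illustrative
(the actual mlx5 bridge debugfs layout is not specified in this thread):

```shell
# watch_fdb FILE [N] [INTERVAL]: snapshot FILE N times, INTERVAL seconds
# apart, and diff consecutive snapshots so that only changed entries
# (e.g. updated lastused values or counters) are printed.
watch_fdb() {
    file=$1
    n=${2:-3}
    interval=${3:-1}
    prev=$(mktemp)
    cat "$file" > "$prev"
    i=0
    while [ "$i" -lt "$n" ]; do
        sleep "$interval"
        cur=$(mktemp)
        cat "$file" > "$cur"
        diff "$prev" "$cur" || true  # non-empty output = state changed
        mv "$cur" "$prev"
        i=$((i+1))
    done
    rm -f "$prev"
}

# Hypothetical usage while traffic is running (path is an assumption):
# watch_fdb /sys/kernel/debug/mlx5/0000:08:00.0/esw/bridge/fdb 5 2
```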

>
>> > Do you have customer who will need this?  
>> 
>> Yes. But strictly for debugging (by a human), not for building some
>> weird proprietary user-space switch-controller application that would
>> query this during normal operation, if I understand your concern
>> correctly.
>> 
>> > At the very least please follow up to make the files readable to only
>> > root. Normal users should never look at debugfs IMO.  
>> 
>> Hmm, all the other debugfs files in mlx5 that I tend to use for
>> debugging switching-related functionality seem to be 0444 (lag,
>> steering, tc hairpin). Why would this one be any different?
>
> Querying the stats seems generally useful, so I'd like to narrow down
> the access as much as possible. This way if the usage spreads we'll hear
> complaints and can go back to creating a more appropriate API.

Ack.
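As a quick way to audit which debugfs entries are world-readable (0444)
versus root-only, the file modes can be listed with a small helper. This
uses GNU find's `-printf`, and the default mlx5 debugfs directory shown is
an assumption:

```shell
# list_modes [DIR]: print each regular file under DIR with its octal
# mode, making it easy to spot 0444 (world-readable) entries versus
# 0400 (root-only) ones.
list_modes() {
    dir=${1:-/sys/kernel/debug/mlx5}   # default path is an assumption
    find "$dir" -type f -printf '%m %p\n' 2>/dev/null | sort
}

# Example: list_modes /sys/kernel/debug/mlx5
```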

