Message-ID: <20211026190136.jkxyqi6b7f4i2bfe@skbuf>
Date:   Tue, 26 Oct 2021 19:01:37 +0000
From:   Vladimir Oltean <vladimir.oltean@....com>
To:     Nikolay Aleksandrov <nikolay@...dia.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Roopa Prabhu <roopa@...dia.com>,
        Ido Schimmel <idosch@...dia.com>,
        Jakub Kicinski <kuba@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Andrew Lunn <andrew@...n.ch>,
        Florian Fainelli <f.fainelli@...il.com>,
        Vivien Didelot <vivien.didelot@...il.com>,
        Vladimir Oltean <olteanv@...il.com>,
        Jiri Pirko <jiri@...dia.com>
Subject: Re: [RFC PATCH net-next 00/15] Synchronous feedback on FDB add/del
 from switchdev to the bridge

On Tue, Oct 26, 2021 at 08:10:45PM +0300, Nikolay Aleksandrov wrote:
> On 26/10/2021 19:54, Vladimir Oltean wrote:
> > On Tue, Oct 26, 2021 at 03:20:03PM +0300, Nikolay Aleksandrov wrote:
> >> On 26/10/2021 14:25, Vladimir Oltean wrote:
> >>> On Tue, Oct 26, 2021 at 01:40:15PM +0300, Nikolay Aleksandrov wrote:
> >>>> Hi,
> >>>> Interesting way to work around the asynchronous notifiers. :) I went over
> >>>> the patch set, and given that we'll have to support and maintain this fragile
> >>>> solution (e.g. playing with locking, possible races with fdb changes, etc.), I'm
> >>>> inclined to go with Ido's previous proposition to convert the hash_lock into a mutex,
> >>>> with delayed learning from the fast path to get a sleepable context where we can
> >>>> use synchronous switchdev calls and get feedback immediately.
> >>>
> >>> Delayed learning means that we'll receive a sequence of packets like this:
> >>>
> >>>             br0--------\
> >>>           /    \        \
> >>>          /      \        \
> >>>         /        \        \
> >>>      swp0         swp1    swp2
> >>>       |            |        |
> >>>    station A   station B  station C
> >>>
> >>> Station A sends a request to B, and station B sends a reply to A.
> >>> Since the learning of station A's MAC SA races with the reply sent by
> >>> station B, it now becomes theoretically possible for the reply packet to
> >>> be flooded to station C as well, right? And that was not possible before
> >>> (at least assuming an ageing time longer than the round-trip time of these packets).
> >>>
> >>> And that will happen regardless of whether switchdev is used or not.
> >>> I don't want to outright dismiss this (maybe I don't fully understand
> >>> this either), but it seems like a pretty heavy-handed change.
> >>>
> >>
> >> It will, depending on lock contention. I plan to add a fast/uncontended case with a
> >> trylock from the fast path, and if that fails then queue the fdb, but yes - in general
> > 
> > I wonder why mutex_trylock has this comment?
> > 
> >  * This function must not be used in interrupt context. The
> >  * mutex must be released by the same task that acquired it.
> > 
> >> you are correct that the traffic could get flooded in the queued case before the delayed
> >> learning processes the entry; it's a trade-off if we want a sleepable learning context.
> >> Ido noted privately that this is usually how hw acts anyway; also, if people want guarantees
> >> that the reply won't get flooded, there are other methods to achieve that (ucast flood
> >> disable, firewall rules, etc.).
> > 
> > Not all hardware is like that: the switches I'm working with, which
> > perform autonomous learning, all complete the learning process for a
> > frame strictly before they start the forwarding process. The software
> > bridge also behaves like that. My only concern is that we might start
> > building on top of some fundamental bridge changes like these, which
> > might risk a revert a few months down the line, when somebody notices
> > and comes up with a use case where that is not acceptable.
> > 
> 
> I should've clarified that I was talking about Spectrum, as Ido did in a reply.

Ido also said "I assume Spectrum is not special in this regard", and I
just wanted to point this out so that we don't end up with the wrong
impression. Special or not, to the software bridge it would be new
behavior, which I can only hope will be well received.

> >> Today the reply could get flooded if the entry can't be programmed, too; e.g. the atomic
> >> allocation might fail and we'll flood it again. Granted, that's much less likely,
> >> but still, there have never been any such guarantees. I think it's generally a good improvement and
> >> will remove a lot of processing complexity. We can bite the bullet and get the underlying delayed
> >> infrastructure correct once, now; then the locking rules and other use cases will be easier to enforce
> >> and reason about in the future.
> > 
> > You're the maintainer; I certainly won't complain if we go down this path.
> > It would be nice if br->lock could also be transformed into a mutex; it
> > would make all of switchdev much simpler.
> > 
> 
> That is why we are discussing possible solutions; I don't want to force anything,
> but rather to reach a consensus about the way forward. There are complexities involved in
> moving to delayed learning too. One is that the queue won't be a simple linked list,
> but probably a structure that allows fast lookups (e.g. an rbtree) to avoid duplicating entries;
> we may also end up doing a two-stage lookup, if the main hash table doesn't find an entry,
> to close the above scenario's window as much as possible. But as I said, I think that we can get
> these correct and well defined; after that, we'll be able to reason about future changes and
> use cases more easily. I'm still thinking about the various corner cases with delayed learning, so
> any feedback is welcome. I'll start working on it in a few days and will get a clearer
> view of the issues once I start converting the bridge, but having straightforward locking
> rules and known deterministic behaviour sounds like a better long-term plan.

Sorry, I might have to read the code; I don't want to misinterpret your
idea. With what you're describing here, I think that what you are trying
to achieve is to both have it our way (synchronous switchdev feedback)
and preserve the current behavior for the common case, where learning
still happens from the fast path. But I'm not sure that this is the
correct goal, especially if you also add "straightforward locking rules"
to the mix.

I think you'd have to explain what the purpose of the fast-path trylock
logic you've mentioned above is. So in the uncontended br->hash_lock case,
br_fdb_update() could take that mutex, and then what? It would create and
add the FDB entry and call fdb_notify(), but that still can't sleep.
So if switchdev drivers still need to privately defer the offloading
work, we still need some crazy completion-based mechanism to report
errors, like the one I posted here, because in the general sense,
SWITCHDEV_FDB_ADD_TO_DEVICE will still be atomic.
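
To make sure we're picturing the same thing, here is roughly how I
imagine that fast path would look, assuming hash_lock has been converted
to a mutex as you propose (fdb_insert_locked() and fdb_queue_pending()
are made-up names; note also the mutex_trylock comment I quoted above
regarding interrupt context):

void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source,
		   const unsigned char *addr, u16 vid, unsigned long flags)
{
	if (mutex_trylock(&br->hash_lock)) {
		/* Uncontended case: update the main hash table directly.
		 * We are still in softirq context here, though, so
		 * fdb_notify() -> SWITCHDEV_FDB_ADD_TO_DEVICE still
		 * cannot sleep, and drivers must keep deferring the
		 * offload work privately.
		 */
		fdb_insert_locked(br, source, addr, vid, flags);
		mutex_unlock(&br->hash_lock);
	} else {
		/* Contended case: queue the entry for the deferred
		 * worker; the pending structure needs its own lock.
		 */
		fdb_queue_pending(br, source, addr, vid, flags);
	}
}

Even in the winning branch, the notification is as atomic as it is today.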

And how do you queue a deletion? Do you delete the FDB entry from both
the pending and the main hash tables, or do you just create a deletion
command to be processed in deferred context?
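
For the sake of argument, I could see a deletion being queued as an
opcode in the same pending structure (all names here are made up), so
that a del can cancel a not-yet-processed add instead of racing with it:

/* Hypothetical pending-operation entry; an rbtree keyed by {addr, vid}
 * gives the fast lookups you mentioned for avoiding duplicate entries.
 */
enum br_fdb_pending_op {
	BR_FDB_PENDING_ADD,
	BR_FDB_PENDING_DEL,
};

struct br_fdb_pending_entry {
	struct rb_node		rbnode;		/* keyed by {addr, vid} */
	unsigned char		addr[ETH_ALEN];
	u16			vid;
	enum br_fdb_pending_op	op;		/* last requested op wins */
};

But even then, the question remains what happens to the entry in the
main hash table in the meantime.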

And since you'd be operating on the hash table concurrently from the
deferred work and from the fast path, doesn't this mean you'll need to
use some sort of spin_lock_bh from the deferred work, which again would
incur atomic context when you want to notify switchdev? Because with the
mutex_trylock described above, you'd gain exclusivity over the main hash
table, but if you lose the trylock, what you need is exclusivity over
the pending hash table. So the deferred context also needs to be atomic
when it dequeues the pending FDB entries and notifies them. I just don't
see what we're winning, or how the rtnetlink functions will change for
the better.
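
Concretely, I imagine the dequeue side would have to look something like
this (fdb_pending_work, fdb_pending_lock, fdb_pending_tree and
br_fdb_process_pending() are all hypothetical), which is what I mean by
the deferred context still needing atomicity around the pending table:

static void br_fdb_deferred_work(struct work_struct *work)
{
	struct net_bridge *br = container_of(work, struct net_bridge,
					     fdb_pending_work);
	struct rb_root batch;

	/* The fast path queues entries from softirq context, so
	 * exclusivity over the pending tree still requires an atomic
	 * lock, even though this worker itself is allowed to sleep.
	 */
	spin_lock_bh(&br->fdb_pending_lock);
	batch = br->fdb_pending_tree;
	br->fdb_pending_tree = RB_ROOT;
	spin_unlock_bh(&br->fdb_pending_lock);

	/* Only after splicing the batch out can we take the sleepable
	 * hash_lock mutex and notify switchdev synchronously.
	 */
	mutex_lock(&br->hash_lock);
	br_fdb_process_pending(br, &batch);
	mutex_unlock(&br->hash_lock);
}

Maybe batching like this keeps the atomic section short, but the atomic
section is still there.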
