Message-ID: <5d3190a8-b0f2-f695-564f-318f1d1e4a0c@nvidia.com>
Date: Mon, 23 Aug 2021 18:42:20 +0300
From: Nikolay Aleksandrov <nikolay@...dia.com>
To: Ido Schimmel <idosch@...sch.org>,
Vladimir Oltean <olteanv@...il.com>
Cc: Vladimir Oltean <vladimir.oltean@....com>, netdev@...r.kernel.org,
Jakub Kicinski <kuba@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Roopa Prabhu <roopa@...dia.com>, Andrew Lunn <andrew@...n.ch>,
Florian Fainelli <f.fainelli@...il.com>,
Vivien Didelot <vivien.didelot@...il.com>,
Vadym Kochan <vkochan@...vell.com>,
Taras Chornyi <tchornyi@...vell.com>,
Jiri Pirko <jiri@...dia.com>, Ido Schimmel <idosch@...dia.com>,
UNGLinuxDriver@...rochip.com,
Grygorii Strashko <grygorii.strashko@...com>,
Marek Behun <kabel@...ckhole.sk>,
DENG Qingfang <dqfext@...il.com>,
Kurt Kanzenbach <kurt@...utronix.de>,
Hauke Mehrtens <hauke@...ke-m.de>,
Woojung Huh <woojung.huh@...rochip.com>,
Sean Wang <sean.wang@...iatek.com>,
Landen Chao <Landen.Chao@...iatek.com>,
Claudiu Manoil <claudiu.manoil@....com>,
Alexandre Belloni <alexandre.belloni@...tlin.com>,
George McCollister <george.mccollister@...il.com>,
Ioana Ciornei <ioana.ciornei@....com>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>,
Lars Povlsen <lars.povlsen@...rochip.com>,
Steen Hegelund <Steen.Hegelund@...rochip.com>,
Julian Wiedmann <jwi@...ux.ibm.com>,
Karsten Graul <kgraul@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Ivan Vecera <ivecera@...hat.com>,
Vlad Buslov <vladbu@...dia.com>,
Jianbo Liu <jianbol@...dia.com>,
Mark Bloch <mbloch@...dia.com>, Roi Dayan <roid@...dia.com>,
Tobias Waldekranz <tobias@...dekranz.com>,
Vignesh Raghavendra <vigneshr@...com>,
Jesse Brandeburg <jesse.brandeburg@...el.com>
Subject: Re: [PATCH v2 net-next 0/5] Make SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE
blocking
On 23/08/2021 18:18, Ido Schimmel wrote:
> On Mon, Aug 23, 2021 at 05:29:53PM +0300, Vladimir Oltean wrote:
>> On Mon, Aug 23, 2021 at 03:16:48PM +0300, Ido Schimmel wrote:
>>> I was thinking about the following case:
>>>
>>> t0 - <MAC1,VID1,P1> is added in syscall context under 'hash_lock'
>>> t1 - br_fdb_delete_by_port() flushes entries under 'hash_lock' in
>>> response to STP state. Notifications are added to 'deferred' list
>>> t2 - switchdev_deferred_process() is called in syscall context
>>> t3 - <MAC1,VID1,P1> is notified as blocking
>>>
>>> Updates to the SW FDB are protected by 'hash_lock', but updates to the
>>> HW FDB are not. In this case, <MAC1,VID1,P1> does not exist in SW, but
>>> it will exist in HW.
>>>
>>> Another case assuming switchdev_deferred_process() is called first:
>>>
>>> t0 - switchdev_deferred_process() is called in syscall context
>>> t1 - <MAC1,VID1,P1> is learned under 'hash_lock'. Notification is added
>>> to 'deferred' list
>>> t2 - <MAC1,VID1,P1> is modified in syscall context under 'hash_lock' to
>>> <MAC1,VID1,P2>
>>> t3 - <MAC1,VID1,P2> is notified as blocking
>>> t4 - <MAC1,VID1,P1> is notified as blocking (next time the 'deferred'
>>> list is processed)
>>>
>>> In this case, the HW will have <MAC1,VID1,P1>, but SW will have
>>> <MAC1,VID1,P2>
>>
>> Ok, so if the hardware FDB entry needs to be updated under the same
>> hash_lock as the software FDB entry, then it seems that the goal of
>> updating the hardware FDB synchronously and in a sleepable manner is
>> achievable only if the data path defers the learning to sleepable
>> context too. That in turn means there will be 'dead time' between the
>> reception of a packet from a given {MAC SA, VID} flow and the learning
>> of that address, which I don't think is really desirable. So I don't
>> know if it is actually realistic to do this.
>>
>> Can we drop it from the requirements of this change, or do you feel like
>> it's not worth it to make my change if this problem is not solved?
>
> I didn't pose it as a requirement, but as a desirable goal that I don't
> know how to achieve without surgery in the bridge driver that Nik and
> you (understandably) don't like.
>
> Regarding a more practical solution, earlier versions (not what you
> posted yesterday) have the undesirable properties of being both
> asynchronous (current state) and mandating RTNL to be held. If we are
> going with the asynchronous model, then I think we should have a model
> that doesn't force RTNL and allows batching.
>
> I have the following proposal, which I believe solves your problem and
> allows for batching without RTNL:
>
> The pattern of enqueuing a work item per-entry is not very smart.
> Instead, it is better to add the notification info to a list
> (protected by a spin lock) and schedule a single work item whose
> purpose is to dequeue entries from this list and batch process them.
>
> Inside the work item you would do something like:
>
> spin_lock_bh()
> list_splice_init()
> spin_unlock_bh()
>
> mutex_lock() // rtnl or preferably private lock
> list_for_each_entry_safe()
> // process entry
> cond_resched()
> mutex_unlock()
>
> In del_nbp(), after br_fdb_delete_by_port(), the bridge will emit some
> new blocking event (e.g., SWITCHDEV_FDB_FLUSH_TO_DEVICE) that will
> instruct the driver to flush all its pending FDB notifications. You
> don't strictly need this notification because of the
> netdev_upper_dev_unlink() that follows, but it helps in making things
> more structured.
>
I was also thinking about a solution along these lines; I like this proposal.
> Pros:
>
> 1. Solves your problem?
> 2. Pattern is not worse than what we currently have
> 3. Does not force RTNL
> 4. Allows for batching. For example, mlxsw has the ability to program up
> to 64 entries in one transaction with the device. I assume other devices
> in the same grade have similar capabilities
Batching would help a lot even if we don't remove rtnl; on loaded systems rtnl itself
is a bottleneck and we've seen crazy delays in commands because of contention. That,
coupled with the ability to program multiple entries, would be a nice win.
>
> Cons:
>
> 1. Asynchronous
> 2. Pattern we will see in multiple drivers? Can consider migrating it
> into switchdev itself at some point
> 3. Something I missed / overlooked
>
>> There is of course the option of going half-way too, just like for
>> SWITCHDEV_PORT_ATTR_SET. You notify it once, synchronously, on the
>> atomic chain, the switchdev throws as many errors as it reasonably
>> can, then you defer the actual installation, which means a hardware access.
>
> Yes, the above proposal has the same property. You can throw errors
> before enqueueing the notification info on your list.
>
Thanks,
Nik