Date:   Thu, 18 Apr 2019 16:33:22 +0000
From:   Vlad Buslov <vladbu@...lanox.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
CC:     Vlad Buslov <vladbu@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>,
        "jiri@...nulli.us" <jiri@...nulli.us>,
        "davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [RFC PATCH net-next] net: sched: flower: refactor reoffload for
 concurrent access


On Wed 17 Apr 2019 at 19:34, Jakub Kicinski <jakub.kicinski@...ronome.com> wrote:
> On Wed, 17 Apr 2019 07:29:36 +0000, Vlad Buslov wrote:
>> On Wed 17 Apr 2019 at 00:49, Jakub Kicinski <jakub.kicinski@...ronome.com> wrote:
>> > On Tue, 16 Apr 2019 17:20:47 +0300, Vlad Buslov wrote:
>> >> @@ -1551,6 +1558,10 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>> >>  		goto errout_mask;
>> >>
>> >>  	if (!tc_skip_hw(fnew->flags)) {
>> >> +		spin_lock(&tp->lock);
>> >> +		list_add(&fnew->hw_list, &head->hw_filters);
>> >> +		spin_unlock(&tp->lock);
>> >> +
>> >>  		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack);
>> >>  		if (err)
>> >>  			goto errout_ht;
>> >
>> > Duplicated deletes should be fine, but I'm not sure same is true for
>> > adds.  Won't seeing an add with the same cookie twice confuse drivers?
>> >
>> > There's also the minor issue of offloaded count being off in that
>> > case :)
>>
>> Hmmm, okay. Rejecting duplicate cookies should be a trivial change to
>> drivers though. Do you see any faults with this approach in general?
>
> Trivial or not it adds up, the stack should make driver authors' job as
> easy as possible.  The simplest thing to do would be to add a mutex
> around the HW calls.  But that obviously doesn't work for you, cause
> you want multiple outstanding requests to the FW for a single tp, right?
>
> How about a RW lock, that would take R on normal add/replace/del paths
> and W on replays?  That should scale, no?

I've been thinking some more about possible ways to mitigate the
problem. First of all, I tried to implement a POC of the rwlock approach
in flower, and it isn't straightforward because of lock ordering. Note
that fl_reoffload() is always called with the rtnl lock taken (I didn't
do any work to unlock bind/unbind), whereas fl_change() can be called
without the rtnl lock and needs to obtain it before offloading rules.
This gives us a deadlock if fl_change() takes the locks in the order
rwlock ---> rtnl_lock while fl_reoffload() takes them in the order
rtnl_lock ---> rwlock.
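
To make the inversion concrete, here is a minimal sketch. hw_rwsem is a
hypothetical field on struct cls_fl_head, and it would have to be an
rw_semaphore rather than a spinning rwlock_t, since rtnl_lock() sleeps:

	/* fl_change() path, may be entered without rtnl: */
	down_read(&head->hw_rwsem);	/* R for regular add/replace/del */
	rtnl_lock();			/* must be taken before offloading */
	err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack);
	rtnl_unlock();
	up_read(&head->hw_rwsem);

	/* fl_reoffload() path, always entered with rtnl already held: */
	ASSERT_RTNL();
	down_write(&head->hw_rwsem);	/* W for replay */
	/* walk hw_filters and replay each filter to the new callback */
	up_write(&head->hw_rwsem);

One task holds hw_rwsem for read and waits on rtnl, another holds rtnl
and waits on hw_rwsem for write, and neither can make progress.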

Considering this, I tried to improve my solution to remove the
possibility of multiple adds of the same filter, and it seems to me that
it would be enough to move the hw_filters list management into the
flower offload functions: add the filter to the list while holding the
rtnl lock in fl_hw_replace_filter(), and remove it from the list while
holding the rtnl lock in fl_hw_destroy_filter(); a rough sketch follows.
What do you think?
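
Concretely, something like this (real flower function names, but the
bodies are elided and error handling is omitted):

	static int fl_hw_replace_filter(struct tcf_proto *tp,
					struct cls_fl_filter *f, bool rtnl_held,
					struct netlink_ext_ack *extack)
	{
		...
		if (!rtnl_held)
			rtnl_lock();

		/* rtnl serializes us against fl_reoffload(), so the same
		 * filter can't be replayed while we are adding it here
		 */
		list_add(&f->hw_list, &head->hw_filters);
		...
	}

	static void fl_hw_destroy_filter(struct tcf_proto *tp,
					 struct cls_fl_filter *f, bool rtnl_held,
					 struct netlink_ext_ack *extack)
	{
		...
		if (!rtnl_held)
			rtnl_lock();

		list_del_init(&f->hw_list);	/* likewise under rtnl */
		...
	}

This way fl_change() doesn't need to touch hw_filters at all, and list
membership only ever changes under rtnl.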
