Date:   Wed, 10 Apr 2019 16:26:38 +0000
From:   Vlad Buslov <vladbu@...lanox.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
CC:     Vlad Buslov <vladbu@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>,
        "jiri@...nulli.us" <jiri@...nulli.us>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "john.hurley@...ronome.com" <john.hurley@...ronome.com>
Subject: Re: [PATCH net-next] net: sched: flower: insert filter to ht before
 offloading it to hw


On Wed 10 Apr 2019 at 19:09, Jakub Kicinski <jakub.kicinski@...ronome.com> wrote:
> On Wed, 10 Apr 2019 16:02:17 +0000, Vlad Buslov wrote:
>> On Wed 10 Apr 2019 at 18:48, Jakub Kicinski <jakub.kicinski@...ronome.com> wrote:
>> > On Wed, 10 Apr 2019 14:53:53 +0000, Vlad Buslov wrote:
>> >> >> For my next patch set that unlocks the offloads API I implemented the
>> >> >> algorithm to track reoffload count for each tp that works like this:
>> >> >>
>> >> >> 1. struct tcf_proto is extended with a reoffload_count counter that
>> >> >>    is incremented each time reoffload is called on a particular tp
>> >> >>    instance. The counter is protected by tp->lock.
>> >> >>
>> >> >> 2. struct cls_fl_filter is also extended with a reoffload_count
>> >> >>    counter. Its value is set to the current tp->reoffload_count when
>> >> >>    offloading the filter.
>> >> >>
>> >> >> 3. After offloading the filter, but before inserting it into the idr,
>> >> >>    f->reoffload_count is compared with tp->reoffload_count. If the
>> >> >>    values don't match, the filter is deleted and -EAGAIN is returned.
>> >> >>    The cls API retries filter insertion on -EAGAIN.
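
As a toy userspace model of the compare-and-retry in steps 1-3 above (the
struct fields and helper names are just illustrative assumptions, not the
actual patch), something along these lines:

/* Toy model of the reoffload_count compare-and-retry idea; build with
 * -pthread. Nothing here is the real flower/cls API. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct tcf_proto {
	pthread_mutex_t lock;		/* stands in for tp->lock */
	unsigned int reoffload_count;	/* bumped on every reoffload (step 1) */
};

struct cls_fl_filter {
	unsigned int reoffload_count;	/* tp->reoffload_count at offload time */
};

/* Called whenever reoffload runs on this tp instance. */
static void tp_note_reoffload(struct tcf_proto *tp)
{
	pthread_mutex_lock(&tp->lock);
	tp->reoffload_count++;
	pthread_mutex_unlock(&tp->lock);
}

/* Step 2: remember the tp counter when the filter is offloaded to hw. */
static void fl_mark_offloaded(struct tcf_proto *tp, struct cls_fl_filter *f)
{
	pthread_mutex_lock(&tp->lock);
	f->reoffload_count = tp->reoffload_count;
	pthread_mutex_unlock(&tp->lock);
}

/* Step 3: before the filter becomes visible in the idr, check that no
 * reoffload ran in between; on mismatch the caller deletes the filter and
 * the cls API retries the whole insert. */
static int fl_check_before_insert(struct tcf_proto *tp, struct cls_fl_filter *f)
{
	int err = 0;

	pthread_mutex_lock(&tp->lock);
	if (f->reoffload_count != tp->reoffload_count)
		err = -EAGAIN;
	pthread_mutex_unlock(&tp->lock);
	return err;
}

int main(void)
{
	struct tcf_proto tp = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct cls_fl_filter f = { 0 };

	fl_mark_offloaded(&tp, &f);
	tp_note_reoffload(&tp);		/* a reoffload raced with the insert */
	printf("insert result: %d\n", fl_check_before_insert(&tp, &f));
	return 0;
}
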
>> >> >
>> >> > Sounds good for add.  Does this solve delete case as well?
>> >> >
>> >> >    CPU 0                       CPU 1
>> >> >
>> >> > __fl_delete
>> >> >   IDR remove
>> >> >                            cb unregister
>> >> >                              hw delete all flows  <- doesn't see the
>> >> >                                                      remove in progress
>> >> >
>> >> >   hw delete  <- doesn't see
>> >> >                 the removed cb
>> >>
>> >> Thanks for pointing that out! Looks like I need to move the call to hw
>> >> delete in the __fl_delete() function so that it is executed before idr
>> >> removal.
>> >
>> > Ack, plus you need to do the same retry mechanism.  Save CB "count"/seq,
>> > hw delete, remove from IDR, if CB "count"/seq changed hw delete again.
>> > Right?
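
Continuing the toy model above, the delete side could look roughly like this
(again, the helpers and the reuse of reoffload_count as the cb "count"/seq are
assumptions, not the real code):

/* Stubs standing in for the real hw destroy and idr removal. */
static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f)
{
	/* ask every registered cb to drop this filter from hw */
}

static void fl_idr_remove(struct tcf_proto *tp, struct cls_fl_filter *f)
{
	/* remove the filter from the idr */
}

static void fl_delete_with_retry(struct tcf_proto *tp, struct cls_fl_filter *f)
{
	unsigned int seen;

	pthread_mutex_lock(&tp->lock);
	seen = tp->reoffload_count;	/* save the cb "count"/seq */
	pthread_mutex_unlock(&tp->lock);

	fl_hw_destroy_filter(tp, f);	/* hw delete before idr removal */
	fl_idr_remove(tp, f);

	pthread_mutex_lock(&tp->lock);
	if (seen != tp->reoffload_count) {
		pthread_mutex_unlock(&tp->lock);
		/* a reoffload ran while we were deleting: hw delete again so
		 * an unregistering cb cannot miss this filter */
		fl_hw_destroy_filter(tp, f);
		return;
	}
	pthread_mutex_unlock(&tp->lock);
}
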
>>
>> Actually, I intended to modify fl_reoffload() to ignore filters with the
>> 'deleted' flag set when adding, but I guess reusing 'reoffload_count' to
>> retry fl_hw_destroy_filter() would also work.
>
> Yeah, I don't see how you can ignore deleted safely.  Perhaps lack of
> coffee :)

Well, drivers are supposed to account for double deletion, or for deletion of
filters that were never successfully offloaded to them. If a filter is not
marked as skip_sw, its creation will succeed even if the hw callbacks have
failed, but __fl_delete() still calls fl_hw_destroy_filter() on such filters.
The main thing we must guarantee is that the code doesn't delete a new filter
with the same key. However, in the case of the flower classifier the 'cookie'
is a pointer to the filter, and the filter is freed only when the last
reference to it is released, so the code is safe in this regard.

So I guess there is nothing wrong with reoffload calling cb() on all
classifier filters (including those marked as 'deleted'), as long as the
delete code doesn't miss any of the callbacks afterwards.
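
For illustration, a driver-side delete along those lines might look roughly
like this toy version (all names are made up, not any particular driver):

/* Toy driver delete callback: deleting a cookie the driver has never seen
 * (or has already deleted) is a no-op, which is why a repeated
 * fl_hw_destroy_filter() for the same filter is harmless on the driver side. */
#include <stdlib.h>

struct toy_flow {
	unsigned long cookie;		/* flower passes the filter pointer here */
	struct toy_flow *next;
};

struct toy_drv {
	struct toy_flow *flows;		/* flows this driver has offloaded */
};

int toy_drv_delete_flow(struct toy_drv *drv, unsigned long cookie)
{
	struct toy_flow **pp = &drv->flows;

	while (*pp && (*pp)->cookie != cookie)
		pp = &(*pp)->next;
	if (!*pp)
		return 0;		/* never offloaded or already gone */

	struct toy_flow *victim = *pp;

	*pp = victim->next;		/* unlink, then release hw state */
	free(victim);
	return 0;
}
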
