Message-ID: <20181217102248.GB2096@nanopsycho>
Date:   Mon, 17 Dec 2018 11:22:48 +0100
From:   Jiri Pirko <jiri@...nulli.us>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     Vlad Buslov <vladbu@...lanox.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        David Miller <davem@...emloft.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH net-next v2 01/17] net: sched: refactor
 mini_qdisc_pair_swap() to use workqueue

Sun, Dec 16, 2018 at 07:52:18PM CET, xiyou.wangcong@...il.com wrote:
>On Sun, Dec 16, 2018 at 8:32 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>>
>> On Thu 13 Dec 2018 at 23:32, Cong Wang <xiyou.wangcong@...il.com> wrote:
>> > On Tue, Dec 11, 2018 at 2:19 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>> >>
>> >> As part of the effort to remove the dependency on the rtnl lock, the
>> >> cls API is being converted to use fine-grained locking mechanisms
>> >> instead of the global rtnl lock. However, the chain_head_change
>> >> callback for the ingress Qdisc is a sleeping function and cannot be
>> >> executed while holding a spinlock.
>> >
>> >
>> > Why does it have to be a spinlock rather than a mutex?
>> >
>> > I've read your cover letter and this changelog, and I don't find any
>> > answer.
>>
>> My initial implementation used a mutex. However, it was changed to a
>> spinlock at Jiri's request during internal review.
>>
>
>So what's the answer to my question? :)

Yeah, my concern against mutexes was that we would need one per block
and one per chain. I find that quite heavy and I believe it is better to
use spinlocks in those cases. This patch is a side effect of that. Do
you think it would be better to have mutexes instead of spinlocks?
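
To illustrate the trade-off, here is a rough sketch of the two options
(hypothetical struct, field, and callback names -- not the actual cls
API code; the real callback signature is simplified, and the sizes are
only typical for x86-64 without lock debugging):

#include <linux/mutex.h>
#include <linux/spinlock.h>

struct tcf_block_sketch {
	/* Option 1: spinlock -- small (usually 4 bytes), but code
	 * holding it must not sleep, so a sleeping chain_head_change
	 * callback cannot be invoked under it. */
	spinlock_t lock;

	/* Option 2: mutex -- the callback could sleep under it, but at
	 * roughly 32 bytes each, one per block plus one per chain adds
	 * up quickly. */
	struct mutex mlock;
};

static void head_change_under_spinlock(struct tcf_block_sketch *b,
				       void (*chain_head_change)(void *priv),
				       void *priv)
{
	spin_lock_bh(&b->lock);
	chain_head_change(priv);	/* BUG if the callback sleeps:
					 * sleeping in atomic context */
	spin_unlock_bh(&b->lock);
}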


>
>
>> >
>> >>
>> >> Extend the cls API with a new workqueue intended for tcf_proto
>> >> lifetime management. Modify tcf_proto_destroy() to deallocate the
>> >> proto asynchronously on the workqueue in order to ensure that all
>> >> chain_head_change callbacks involving the proto complete before it
>> >> is freed. Convert mini_qdisc_pair_swap(), which is used as a
>> >> chain_head_change callback for ingress and clsact Qdiscs, to use a
>> >> workqueue. Move Qdisc deallocation to the tc_proto_wq ordered
>> >> workqueue that is used to destroy tcf proto instances. This is
>> >> necessary to ensure that the Qdisc is destroyed after all
>> >> chain/proto instances it contains, in order to prevent a
>> >> use-after-free error in tc_chain_notify_delete().
>> >
>> >
>> > Please avoid async unless you have to; there are almost always bugs
>> > when playing with deferred workqueues or any other callbacks.
>>
>> Indeed, async Qdisc and tp deallocation introduces additional
>> complexity. What approach would you recommend to make the
>> chain_head_change callback atomic?
>
>I haven't looked into any of your code yet, but from my understanding
>of your changelog, it seems all this workqueue stuff could go away if
>you made it a mutex instead of a spinlock.
>
>This is why I stopped here and am waiting for your answer to my
>question above.
>
>Thanks.
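
For reference, the pattern Vlad's quoted changelog describes looks
roughly like this: free tcf_proto instances from an ordered workqueue
so that any chain_head_change work queued earlier is guaranteed to have
finished. This is only a sketch with made-up names (tc_proto_wq_sketch,
tp_sketch); the actual patch code may differ.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Ordered workqueue: executes at most one work item at a time, in
 * queueing order. */
static struct workqueue_struct *tc_proto_wq_sketch;

struct tp_sketch {
	struct work_struct destroy_work;
	/* ... filter state ... */
};

static void tp_destroy_work(struct work_struct *work)
{
	struct tp_sketch *tp = container_of(work, struct tp_sketch,
					    destroy_work);

	/* Because the workqueue is ordered, any chain_head_change work
	 * queued before this item has already completed, so freeing
	 * here cannot race with it. */
	kfree(tp);
}

static int tp_wq_init(void)
{
	tc_proto_wq_sketch = alloc_ordered_workqueue("tc_proto_wq_sketch", 0);
	return tc_proto_wq_sketch ? 0 : -ENOMEM;
}

/* Deferred counterpart of a synchronous tcf_proto_destroy(). */
static void tp_destroy_deferred(struct tp_sketch *tp)
{
	INIT_WORK(&tp->destroy_work, tp_destroy_work);
	queue_work(tc_proto_wq_sketch, &tp->destroy_work);
}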
