Open Source and information security mailing list archives
 
Date:   Tue, 19 Feb 2019 16:04:54 +0000
From:   Vlad Buslov <vladbu@...lanox.com>
To:     Cong Wang <xiyou.wangcong@...il.com>
CC:     Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Jiri Pirko <jiri@...nulli.us>,
        David Miller <davem@...emloft.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH net-next v4 05/17] net: sched: traverse chains in block
 with tcf_get_next_chain()


On Mon 18 Feb 2019 at 18:26, Cong Wang <xiyou.wangcong@...il.com> wrote:
> On Mon, Feb 18, 2019 at 2:07 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>>
>> Hi Cong,
>>
>> Thanks for reviewing!
>>
>> On Fri 15 Feb 2019 at 22:21, Cong Wang <xiyou.wangcong@...il.com> wrote:
>> > (Sorry for joining this late.)
>> >
>> > On Mon, Feb 11, 2019 at 12:56 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>> >> @@ -2432,7 +2474,11 @@ static int tc_dump_chain(struct sk_buff *skb, struct netlink_callback *cb)
>> >>         index_start = cb->args[0];
>> >>         index = 0;
>> >>
>> >> -       list_for_each_entry(chain, &block->chain_list, list) {
>> >> +       for (chain = __tcf_get_next_chain(block, NULL);
>> >> +            chain;
>> >> +            chain_prev = chain,
>> >> +                    chain = __tcf_get_next_chain(block, chain),
>> >> +                    tcf_chain_put(chain_prev)) {
>> >
>> > Why do you want to take the block->lock in each iteration
>> > of the loop rather than taking it once for the whole loop?
>>
>> This loop calls the classifier ops callback in tc_chain_fill_node(). I
>> don't call any classifier ops callbacks while holding the block or
>> chain lock in this change, because the goal is to achieve fine-grained
>> locking for the data structures used by the filter update path. Locking
>> per-block or per-chain is much coarser than taking references on the
>> parent structures and allowing classifiers to implement their own
>> locking.
>
> That is the problem: when we have N filter chains in a block, you
> lock and unlock the mutex N times... And what __tcf_get_next_chain()
> does is basically just retrieve the next entry in the list, so under
> contention the overhead of the mutex is likely more than that of the
> list operation itself.
>
> Now I can see why you complained about mutex before, it is
> how you use it, not actually its own problem. :)
>
>>
>> In this case the call to ops->tmplt_dump() is probably quite fast, and
>> its execution time doesn't depend on the number of filters on the
>> classifier, so releasing block->lock on each iteration doesn't provide
>> much benefit, if any. However, it is easier for me to reason about
>> locking correctness in this refactoring by following a simple rule: no
>> locks (besides the rtnl mutex) can be held when calling classifier ops
>> callbacks.
>
> Well, for me, hierarchical locking is always simple when you take
> the locks in the right order, that is, the larger-scope lock first
> and then the smaller-scope one.
>
> The way you use the locking here is actually harder for me to
> review, because it is hard to validate atomicity when you unlock
> the larger-scope lock and re-take the smaller-scope one. You
> use a refcnt to ensure the object will not go away, but that is
> still far from a guarantee of atomicity.
>
> For example, tp->ops->change() which changes an existing
> filter, I don't see you lock either block->lock or
> chain->filter_chain_lock when calling it. How does it even work?

Sorry for not describing it in more detail in the cover letter. The
basic approach is that the cls API obtains references to all necessary
data structures (Qdisc, block, chain, tp) and calls classifier ops
callbacks only after releasing all the locks. This defers all locking
and atomicity concerns to the actual classifier implementations. In the
case of tp->ops->change(), the flower classifier uses tp->lock. In the
case of dump code (both chain and filter), there is no atomicity with or
without my changes, because the rtnl lock is released after sending each
skb to userspace and concurrent changes to tc structures can occur in
between.
