Date:   Tue, 19 Feb 2019 14:08:46 +0000
From:   Vlad Buslov <>
To:     Cong Wang <>
CC:     Linux Kernel Network Developers <>,
        Jamal Hadi Salim <>,
        Jiri Pirko <>,
        David Miller <>
Subject: Re: [PATCH net-next 09/12] net: sched: flower: handle concurrent tcf
 proto deletion

On Mon 18 Feb 2019 at 20:47, Cong Wang <> wrote:
> On Wed, Feb 13, 2019 at 11:47 PM Vlad Buslov <> wrote:
>> Without rtnl lock protection tcf proto can be deleted concurrently. Check
>> tcf proto 'deleting' flag after taking tcf spinlock to verify that no
>> concurrent deletion is in progress. Return EAGAIN error if concurrent
>> deletion detected, which will cause caller to retry and possibly create new
>> instance of tcf proto.
> Please state the reason why you prefer retrying over locking the whole
> tp without retrying - that is, why and how is it better?
> Personally I always prefer non-retry logic, because it is very easy
> to understand and justify its correctness.
> As you prefer otherwise, please share your reasoning in changelog.
> Thanks!

At the moment, filter removal is implemented by cls API in the following
way:

1) tc_del_tfilter() obtains an opaque void pointer to the filter by
calling the classifier's get() callback.

2) Passes the filter pointer to tfilter_del_notify(), which prepares a
skb with all the necessary info about the filter being removed and...

3) ... calls tp->ops->delete() to actually delete filter.
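To make the race window concrete, here is a minimal userspace sketch of the
three steps above. All names (toy_tp, toy_get, toy_delete, toy_del_tfilter)
are illustrative stand-ins, not the kernel's actual symbols; the pthread
mutex models the tcf spinlock, and the 'deleting' flag models the check the
patch adds:

```c
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a tcf proto instance; the real struct lives in the kernel. */
struct toy_tp {
	pthread_mutex_t lock;	/* models the tcf spinlock */
	bool deleting;		/* set while a concurrent tp deletion is in progress */
	void *filter;		/* the filter, returned as an opaque pointer in step 1 */
};

/* Step 1: look up the filter and return it as an opaque pointer. */
static void *toy_get(struct toy_tp *tp)
{
	return tp->filter;
}

/* Step 3: actually delete the filter. The 'deleting' flag is checked only
 * after taking the lock; -EAGAIN tells the caller to retry, possibly
 * against a freshly created tp instance. */
static int toy_delete(struct toy_tp *tp, void *filter)
{
	(void)filter;
	pthread_mutex_lock(&tp->lock);
	if (tp->deleting) {
		pthread_mutex_unlock(&tp->lock);
		return -EAGAIN;
	}
	tp->filter = NULL;	/* the actual removal */
	pthread_mutex_unlock(&tp->lock);
	return 0;
}

/* Steps 1-3 combined, mirroring the shape of the cls API path. */
static int toy_del_tfilter(struct toy_tp *tp)
{
	void *filter = toy_get(tp);	/* 1) obtain opaque pointer */

	if (!filter)
		return -ENOENT;
	/* 2) a notification skb would be prepared here */
	return toy_delete(tp, filter);	/* 3) actually delete */
}
```

Nothing in steps 1) or 2) holds the tp lock, which is exactly why a
concurrent deletion can slip in between the lookup and the delete.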

Between 1) and 3) the filter can be removed concurrently, and there is
nothing we can do about it in flower besides accounting for it with some
kind of retry logic. I will explain why I prefer that cls API not simply
lock the whole classifier instance when modifying it in the discussion of
the cls API patch "net: sched: protect filter_chain list with
filter_chain_lock mutex".
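The caller-side half of that retry logic can be sketched as follows. This
is a simplified model, not the patch's code: try_delete() and
delete_with_retry() are hypothetical names, and a plain bool stands in for
the tp's 'deleting' state:

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical delete operation: fails with -EAGAIN when it races with a
 * concurrent tp deletion, succeeds otherwise. */
static int try_delete(bool *tp_deleting)
{
	if (*tp_deleting) {
		/* On retry the caller would look the tp up again; here we
		 * model the freshly created instance by clearing the flag. */
		*tp_deleting = false;
		return -EAGAIN;
	}
	return 0;
}

/* Caller-side retry loop: on -EAGAIN, repeat the lookup-and-delete,
 * possibly against a new tp instance. */
static int delete_with_retry(bool tp_deleting)
{
	int err;

	do {
		err = try_delete(&tp_deleting);
	} while (err == -EAGAIN);
	return err;
}
```

The loop terminates because a retry that finds (or creates) a tp that is
not being deleted succeeds on that pass.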
