Message-ID: <1512126307.3155.26.camel@redhat.com>
Date:   Fri, 01 Dec 2017 12:05:07 +0100
From:   Paolo Abeni <pabeni@...hat.com>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Jiri Pirko <jiri@...nulli.us>,
        "David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC PATCH] net_sched: bulk free tcf_block

On Thu, 2017-11-30 at 23:14 -0800, Cong Wang wrote:
> On Wed, Nov 29, 2017 at 6:25 AM, Paolo Abeni <pabeni@...hat.com> wrote:
> > Currently, deleting a qdisc with a large number of children and filters
> > can take a long time:
> > 
> > tc qdisc add dev lo root htb
> > for I in `seq 1 1000`; do
> >         tc class add dev lo parent 1: classid 1:$I htb rate 100kbit
> >         tc qdisc add dev lo parent 1:$I handle $((I + 1)): htb
> >         for J in `seq 1 10`; do
> >                 tc filter add dev lo parent $((I + 1)): u32 match ip src 1.1.1.$J
> >         done
> > done
> > time tc qdisc del dev lo root
> > 
> > real    0m54.764s
> > user    0m0.023s
> > sys     0m0.000s
> > 
> > This is due to the multiple rcu_barrier() calls, one for each freed
> > tcf_block, all invoked with the rtnl lock held. Most other network-related
> > tasks will block for the whole duration.
> 
> Yeah, Eric pointed this out too, and I already had an idea to cure
> this.
> 
> As I mentioned before, my idea is to refcount the tcf_block so that
> we don't need to worry about which deletion is the last one. Something
> like the patch attached below; note it is a PoC _only_, not even
> compiled yet. I am not 100% sure it works either; I will look deeper
> tomorrow.

Thank you for the feedback.

I tested your patch and in the above scenario I measure:

real	0m0.017s
user	0m0.000s
sys	0m0.017s

so it apparently works well for this case.
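
If I read the idea correctly, it boils down to something like the
following (hypothetical field and function names, just a sketch of the
scheme rather than your actual diff):

	/* sketch: each user of the block holds a reference; the last
	 * put frees it after a grace period, with no rcu_barrier()
	 * under rtnl (names below are made up) */
	struct tcf_block {
		refcount_t refcnt;
		struct rcu_head rcu;
		/* ... */
	};

	static void block_put(struct tcf_block *block)
	{
		if (refcount_dec_and_test(&block->refcnt))
			kfree_rcu(block, rcu);	/* async free, no blocking */
	}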

We could still see a storm of rtnl lock/unlock operations while
deleting a large tc tree with a lot of filters, and I think we can
reduce them with bulk free, possibly applying it to the filters, too.
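
By bulk free I mean something along these lines (again only a sketch,
with a made-up 'free_list' linkage in tcf_block): collect the dying
blocks on a list and pay a single rcu_barrier() for the whole batch:

	/* sketch: one rcu_barrier() per batch instead of one per block */
	static void blocks_bulk_free(struct list_head *head)
	{
		struct tcf_block *block, *tmp;

		rcu_barrier();	/* single grace-period wait for the batch */
		list_for_each_entry_safe(block, tmp, head, free_list) {
			list_del(&block->free_list);
			kfree(block);
		}
	}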

That would also reduce the pressure on the rtnl lock when, e.g., OVS
H/W offload pushes a lot of rules per second.

WDYT?

Cheers,

Paolo
