Message-ID: <CAM_iQpU7rXqE0iXQzr5kJx2ab_v6OXmL1drt+VJYJAGoL5dyug@mail.gmail.com>
Date: Fri, 1 Dec 2017 14:07:19 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Jiri Pirko <jiri@...nulli.us>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC PATCH] net_sched: bulk free tcf_block
On Fri, Dec 1, 2017 at 3:05 AM, Paolo Abeni <pabeni@...hat.com> wrote:
>
> Thank you for the feedback.
>
> I tested your patch and in the above scenario I measure:
>
> real 0m0.017s
> user 0m0.000s
> sys 0m0.017s
>
> so it apparently works well for this case.
Thanks a lot for testing it! I will test it further. If it goes well I will
send a formal patch with your Tested-by, unless you object.
>
> We could still have a storm of rtnl lock/unlock operations while
> deleting a large tc tree with lots of filters, and I think we can reduce
> them with bulk free, eventually applying it to filters, too.
>
> That will also reduce the pressure on the rtnl lock when e.g. OVS H/W
> offload pushes a lot of rules/sec.
>
> WDYT?
>
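Just to make sure I read "bulk free" the same way you do: roughly, unlink
everything under a single RTNL section and release the memory afterwards
outside the lock, along these lines? The struct and helpers below are only
illustrative, not the actual cls_api code:

	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/rtnetlink.h>

	/* Illustrative stand-in for a block queued for destruction. */
	struct pending_block {
		struct list_head list;
		void *block;		/* would be struct tcf_block * */
	};

	static void bulk_free_blocks(struct list_head *to_free)
	{
		struct pending_block *p, *tmp;

		/* No RTNL needed here: the blocks are already unlinked. */
		list_for_each_entry_safe(p, tmp, to_free, list) {
			list_del(&p->list);
			kfree(p->block);	/* stand-in for the real release */
			kfree(p);
		}
	}

	static void destroy_many_blocks(struct list_head *blocks)
	{
		struct pending_block *p, *tmp;
		LIST_HEAD(to_free);

		rtnl_lock();
		/* Only the unlink from qdisc/chain structures runs under RTNL. */
		list_for_each_entry_safe(p, tmp, blocks, list)
			list_move_tail(&p->list, &to_free);
		rtnl_unlock();

		bulk_free_blocks(&to_free);
	}
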
Why is this specific to tc filters? From what you are saying, we would need to
batch all TC operations (qdisc, filter and action) rather than just filters?
In the short term, I think batching rtnl lock/unlock is a good optimization,
so I have no objection. For the long term, I think we need to revise the RTNL
lock and probably move it down into each layer, but that clearly requires
much more work.
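
By batching rtnl lock/unlock I mean one rtnl_lock()/rtnl_unlock() pair
covering a whole batch of requests instead of one pair per operation,
something like the sketch below. The request structure and handler are made
up for illustration, not an existing kernel API:

	#include <linux/list.h>
	#include <linux/rtnetlink.h>

	/* Made-up container for one queued TC request (qdisc/filter/action). */
	struct tc_batch_req {
		struct list_head list;
		void (*do_op)(struct tc_batch_req *req);	/* per-op handler */
	};

	/*
	 * Today each operation takes and drops RTNL on its own.  With
	 * batching, one lock/unlock pair covers the whole list of requests,
	 * which is where the saving comes from when userspace pushes many
	 * rules per second.
	 */
	static void tc_run_batch(struct list_head *reqs)
	{
		struct tc_batch_req *req;

		rtnl_lock();
		list_for_each_entry(req, reqs, list)
			req->do_op(req);
		rtnl_unlock();
	}
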
Thanks.