Message-ID: <1512382351.2586.11.camel@redhat.com>
Date: Mon, 04 Dec 2017 11:12:31 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Cong Wang <xiyou.wangcong@...il.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Jiri Pirko <jiri@...nulli.us>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC PATCH] net_sched: bulk free tcf_block
Hi,
On Fri, 2017-12-01 at 14:07 -0800, Cong Wang wrote:
> On Fri, Dec 1, 2017 at 3:05 AM, Paolo Abeni <pabeni@...hat.com> wrote:
> >
> > Thank you for the feedback.
> >
> > I tested your patch and in the above scenario I measure:
> >
> > real 0m0.017s
> > user 0m0.000s
> > sys 0m0.017s
> >
> > so it apparently works well for this case.
>
> Thanks a lot for testing it! I will test it further. If it goes well I will
> send a formal patch with your Tested-by, unless you object.
I'm late replying, but I was fine with the above ;)
> > We could still have a storm of rtnl lock/unlock operations while
> > deleting a large tc tree with a lot of filters, and I think we can reduce
> > them with bulk free, eventually applying it to filters, too.
> >
> > That will also reduce the pressure on the rtnl lock when e.g. OVS H/W
> > offload pushes a lot of rules/sec.
> >
> > WDYT?
> >
>
> Why is this specific to tc filters? From what you are saying, we need to
> batch all TC operations (qdisc, filter and action) rather than just filters?
Exactly, the idea would be to batch all the delayed work items. I started
with blocks, to somewhat tackle the issue seen on qdisc removal.
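Something along the lines of the sketch below is what I have in mind. To be
clear, this is not the current net_sched code: the list head, work item and
helper names (pending_blocks, tcf_block_free_work, tcf_block_defer_free) are
made up for illustration, and it is untested. The point is just that dying
blocks get queued on a lockless list and a single work item drains them all
under one rtnl_lock()/rtnl_unlock() pair, instead of one lock round-trip per
block:

#include <linux/llist.h>
#include <linux/workqueue.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>

/* stand-in for struct tcf_block, with a node for the pending-free list */
struct tcf_block_stub {
	struct llist_node free_node;
	/* ... filter chains, refcount, ... */
};

static LLIST_HEAD(pending_blocks);

/* one work item drains every queued block under a single rtnl round-trip */
static void tcf_block_free_work(struct work_struct *work)
{
	struct llist_node *list = llist_del_all(&pending_blocks);
	struct tcf_block_stub *block, *tmp;

	rtnl_lock();
	llist_for_each_entry_safe(block, tmp, list, free_node) {
		/* tear down the filter chains here, then free the block */
		kfree(block);
	}
	rtnl_unlock();
}

static DECLARE_WORK(block_free_work, tcf_block_free_work);

/* called instead of scheduling one work item per dying block */
static void tcf_block_defer_free(struct tcf_block_stub *block)
{
	llist_add(&block->free_node, &pending_blocks);
	schedule_work(&block_free_work);	/* no-op if already pending */
}

The same pattern could later be extended to filters and actions, so that a
large tree removal (or a burst of H/W offload rule deletions) only pays for
the rtnl lock once per batch.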
> In the short term, I think batching rtnl lock/unlock is a good optimization,
> so I have no objection. For the long term, I think we need to revise the RTNL
> lock and probably move it down to each layer, but clearly that requires
> much more work.
Agreed!
Paolo