Message-ID: <CAM_iQpV-0f=yX3P=ZD7_-mBvZZn57MGmFxrHqT3U3g+p_mKyJQ@mail.gmail.com>
Date: Mon, 30 Mar 2020 22:59:56 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: Václav Zindulka <vaclav.zindulka@...pnet.cz>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: iproute2: tc deletion freezes whole server
On Sat, Mar 28, 2020 at 6:04 AM Václav Zindulka
<vaclav.zindulka@...pnet.cz> wrote:
>
> On Fri, Mar 27, 2020 at 11:35 AM Václav Zindulka
> <vaclav.zindulka@...pnet.cz> wrote:
> >
> > Your assumption is not totally wrong. I have added some printks into
> > the fq_codel_reset() function. The final passes during deletion go
> > through the if condition you added in the patch - 13706 of them. Yet
> > the rest, and the vast majority - 1768074 - go through the regular
> > routine. 1024 is the value of i in the for loop.
>
> Ok, so I went through the kernel source a little bit. I've found out
> that dev_deactivate() is called only for interfaces that are up. My
> bad - I forgot that after my daemon is deactivated, the ifb interfaces
> are set to down. Nevertheless, after setting ifb0 up and doing a perf
> record on it, the numbers are much lower anyway: 13706 exits through
> the condition added in your patch and 41118 regular exits. I've
> uploaded the perf report here:
> https://github.com/zvalcav/tc-kernel/tree/master/20200328
>
> I've also tried this on a copper interface on a different server,
> which has a link up. There were 39651 patch exits and 286412 regular
> exits. That is more than on the ifb interface, yet way less than on
> the SFP+ interface, and it behaves correctly.

Interesting. At the point dev_deactivate() is called, the refcnt
should not be zero - it should be at least 1 - so my patch should
not affect dev_deactivate(); it does affect the last qdisc_put()
after it.
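
For reference, the ordering I mean is roughly the following (a
simplified sketch, not the exact mainline code; the elided parts are
marked):

	/* dev_deactivate() resets the active qdiscs of an UP device while
	 * they are still referenced (refcnt >= 1).  Only the final
	 * qdisc_put() drops the last reference and reaches
	 * qdisc_destroy(), which is the path my patch changes.
	 */
	void qdisc_put(struct Qdisc *qdisc)
	{
		if (!qdisc)
			return;

		if (qdisc->flags & TCQ_F_BUILTIN ||
		    !refcount_dec_and_test(&qdisc->refcnt))
			return;		/* not the last reference yet */

		qdisc_destroy(qdisc);	/* last ref gone: reset + destroy */
	}

	static void qdisc_destroy(struct Qdisc *qdisc)
	{
		const struct Qdisc_ops *ops = qdisc->ops;

		/* ... estimator/hash teardown elided ... */
		if (ops->reset)
			ops->reset(qdisc);	/* e.g. fq_codel_reset() */
		if (ops->destroy)
			ops->destroy(qdisc);	/* frees the same state anyway */
		/* ... dev_put(), RCU free, etc. ... */
	}
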
Of course, my intention is indeed to eliminate all of the
unnecessary memset() work in ->reset() right before ->destroy().
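
This is roughly what fq_codel_reset() does today (simplified from
memory; the real function also clears q->memory_usage):

	static void fq_codel_reset(struct Qdisc *sch)
	{
		struct fq_codel_sched_data *q = qdisc_priv(sch);
		int i;

		INIT_LIST_HEAD(&q->new_flows);
		INIT_LIST_HEAD(&q->old_flows);
		/* 1024 iterations with the default flows_cnt - this is the
		 * loop your printk counters hit.
		 */
		for (i = 0; i < q->flows_cnt; i++) {
			struct fq_codel_flow *flow = q->flows + i;

			fq_codel_flow_purge(flow);
			INIT_LIST_HEAD(&flow->flowchain);
			codel_vars_init(&flow->cvars);
		}
		/* The memset() below is what I want to skip when ->destroy()
		 * is about to free q->backlogs anyway.
		 */
		memset(q->backlogs, 0, q->flows_cnt * sizeof(u32));
		sch->q.qlen = 0;
		sch->qstats.backlog = 0;
	}
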
If you can test it, I will send you a complete patch tomorrow; it
should improve hfsc_reset() too.
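
hfsc_reset_qdisc() has the same shape - again a simplified sketch from
memory:

	static void hfsc_reset_qdisc(struct Qdisc *sch)
	{
		struct hfsc_sched *q = qdisc_priv(sch);
		struct hfsc_class *cl;
		unsigned int i;

		/* Resets every class in the hash table one by one, which is
		 * also wasted work right before ->destroy() tears the
		 * classes down.
		 */
		for (i = 0; i < q->clhash.hashsize; i++) {
			hlist_for_each_entry(cl, &q->clhash.hash[i],
					     cl_common.hnode)
				hfsc_reset_class(cl);
		}
		q->eligible = RB_ROOT;
		qdisc_watchdog_cancel(&q->watchdog);
		sch->qstats.backlog = 0;
		sch->q.qlen = 0;
	}

so skipping this on the destroy path should help there as well.
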
Thanks.