Message-ID: <CANxWus8=CZ8Y1GvqKFJHhdxun9gB8v1SP0XNZ7SMk4oDvkmEww@mail.gmail.com>
Date:   Thu, 30 Apr 2020 14:40:01 +0200
From:   Václav Zindulka <vaclav.zindulka@...pnet.cz>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: iproute2: tc deletion freezes whole server

On Wed, Apr 15, 2020 at 5:01 PM Václav Zindulka
<vaclav.zindulka@...pnet.cz> wrote:
> > > > The problem is actually more complicated than I thought. Although it
> > > > still needs more work, below is the first pile of patches I have for
> > > > you to test:
> > > >
> > > > https://github.com/congwang/linux/commits/qdisc_reset
> > > >
> > > > It is based on the latest net-next branch. Please let me know the result.
> > >
> > > I have applied all the patches in your four commits to my custom 5.4.6
> > > kernel source. There was no change in the number of fq_codel_reset()
> > > calls. Tested on the ifb, RJ45 and SFP+ interfaces.
> >
> > It is true that my patches do not reduce the number of fq_codel_reset()
> > calls; they are intended to reduce the CPU time spent in each
> > fq_codel_reset().
> >
> > Can you measure this? Note that you no longer have to add your own
> > printk(), because my patches add a few tracepoints, especially for
> > qdisc_reset(). So you can obtain the time by checking the timestamps
> > of these trace events. Of course, you can also use perf trace like you
> > did before.
>
> Sorry for the delayed responses. We were moving to a new house, so I
> didn't have much time to test. I've measured your pile of patches
> against the unpatched kernel. The result is a little better, but only
> about 1 s faster. Results are here:
> https://github.com/zvalcav/tc-kernel/tree/master/20200415
> I've recompiled the kernel without the printk(), which had some overhead
> too. Do you need any additional reports or measurements of other
> interfaces?
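
For reference, the timing of the new tracepoints can be captured roughly
like this; a sketch that assumes the patches register the event as
qdisc:qdisc_reset and uses eth0 as a placeholder device name:

    # enable the qdisc_reset tracepoint via tracefs
    echo 1 > /sys/kernel/debug/tracing/events/qdisc/qdisc_reset/enable
    echo 1 > /sys/kernel/debug/tracing/tracing_on
    tc qdisc del dev eth0 root
    cat /sys/kernel/debug/tracing/trace

    # or record the same event with perf and read the timestamps
    perf record -e qdisc:qdisc_reset -a -- tc qdisc del dev eth0 root
    perf script

The event timestamps show how the reset calls are spread over the whole
tc deletion.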

Hello Cong,

Did you have any time to look into this further? I'm asking because my
boss wants me to give him a verdict. In the meantime I've started studying
eBPF and XDP, so we have an alternative in case there is no solution to
this problem.
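
If we do go that route, the attachment side would look roughly like the
following sketch, where eth0, xdp_prog.o and the section names are
placeholders for a BPF object we would still have to write:

    # attach an XDP program to the interface with iproute2
    ip link set dev eth0 xdp obj xdp_prog.o sec xdp

    # or hook the same kind of program into tc via the clsact qdisc
    tc qdisc add dev eth0 clsact
    tc filter add dev eth0 ingress bpf direct-action obj xdp_prog.o sec ingress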
