Message-ID: <CAGv8E+yuNz33m4rGuPWrsjtGB_GHHFCX9QPFYGcNiwJ8gmQJfw@mail.gmail.com>
Date: Sun, 12 Nov 2017 15:29:49 +0600
From: "Sergey K." <simkergey@...il.com>
To: netdev@...r.kernel.org
Subject: Re: High CPU load by native_queued_spin_lock_slowpath
After 1 month and 2 weeks, I found a solution :)!
The main idea is to redirect outgoing traffic to the ifb device from every
queue of the real eth interface.
Example:
tc qdisc add dev eth0 root handle 1: mq
tc qdisc add dev eth0 parent 1:1 handle 8001: htb
tc filter add dev eth0 parent 8001: u32 ...... action mirred egress redirect dev ifb0
...
tc qdisc add dev eth0 parent 1:4 handle 8004: htb
tc filter add dev eth0 parent 8004: u32 ...... action mirred egress redirect dev ifb0
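
For reference, here is a minimal end-to-end sketch of the idea as a shell
script, assuming a NIC with 4 TX queues and that shaping still happens via
htb on ifb0 as before. The interface names, handles, the catch-all u32 match
and the rate on ifb0 are illustrative placeholders only; the real
classification rules are the ones elided with "......" above.

#!/bin/sh
ETH=eth0
IFB=ifb0
QUEUES=4    # number of TX queues on $ETH (assumption)

ip link add $IFB type ifb 2>/dev/null
ip link set $IFB up

# mq root on the real interface: one child qdisc per hardware TX queue.
tc qdisc add dev $ETH root handle 1: mq

# Attach an htb qdisc to every mq child and redirect its traffic to ifb0,
# so the redirect work is spread across all queues/CPUs instead of going
# through a single root qdisc.
i=1
while [ $i -le $QUEUES ]; do
    tc qdisc add dev $ETH parent 1:$i handle 800$i: htb
    tc filter add dev $ETH parent 800$i: protocol ip u32 \
        match u32 0 0 \
        action mirred egress redirect dev $IFB
    i=$((i + 1))
done

# Example shaping tree on ifb0 (placeholder): a single htb class at 1 Gbit/s.
# The real setup classifies with ipset + skbinfo marks.
tc qdisc add dev $IFB root handle 2: htb default 10
tc class add dev $IFB parent 2: classid 2:10 htb rate 1gbit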
2017-10-10 18:00 GMT+06:00 Sergey K. <simkergey@...il.com>:
> I'm using Debian 9 (stretch), kernel 4.9, on an HP DL385 G7 server
> with 32 CPU cores. NIC queues are tied to processor cores. The server
> is shaping traffic (iproute2 and the htb discipline + skbinfo + ipset
> + ifb) and filtering some rules with iptables.
>
> When traffic goes up to about 1 Gbit/s, the CPU load gets very high.
> Perf tells me that the kernel function
> native_queued_spin_lock_slowpath is taking about 40% of CPU time.
>
> After several hours of searching, I found that if I remove the htb
> discipline from ifb0, the high load goes down.
> So I think the problem is with classifying and shaping by htb.
>
> Does anyone know how to solve this?