Date:   Tue, 10 Oct 2017 07:07:12 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     "Sergey K." <simkergey@...il.com>
Cc:     netdev@...r.kernel.org
Subject: Re: High CPU load by native_queued_spin_lock_slowpath

On Tue, 2017-10-10 at 18:00 +0600, Sergey K. wrote:
> I'm using Debian 9 (stretch), kernel 4.9, on an HP DL385 G7 server
> with 32 CPU cores. NIC queues are tied to processor cores. The server
> shapes traffic (iproute2 with the htb discipline + skbinfo + ipset +
> ifb) and filters some rules with iptables.
> 
> When traffic rises to about 1 Gbit/s, the CPU load becomes very
> high. Perf tells me that the kernel function
> native_queued_spin_lock_slowpath accounts for about 40% of CPU time.
> 
> After several hours of searching, I found that if I remove the htb
> discipline from ifb0, the high load goes away.
> So I think the problem is with classification and shaping by htb.
> 
> Does anyone know how to solve this?

Are you using a single ifb0 for ingress on the whole (multiqueue) device?

What about multiple ifb devices instead, one per RX queue? That way
each ifb has its own qdisc root lock, instead of all 32 CPUs
contending on the single lock of the htb tree on ifb0.
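
Untested sketch. It hashes flows across the ifbs rather than binding
them strictly to RX queues; eth0, the queue count (4) and the rates
are placeholders:

    # One ifb per RX queue (4 assumed here).
    modprobe ifb numifbs=4
    tc qdisc add dev eth0 handle ffff: ingress
    for i in 0 1 2 3; do
        ip link set ifb$i up
        # Each ifb gets its own htb tree, hence its own qdisc root lock.
        tc qdisc add dev ifb$i root handle 1: htb default 10
        tc class add dev ifb$i parent 1: classid 1:10 htb rate 250mbit
        # Spread flows by the low two bits of the destination address
        # (offset 16 into the IP header).
        tc filter add dev eth0 parent ffff: protocol ip prio 1 \
            u32 match u32 $i 0x3 at 16 \
            action mirred egress redirect dev ifb$i
    done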

The alternative is to reduce contention by using a single RX queue.
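
For the single-queue route, if the NIC uses combined channels,
something like:

    ethtool -L eth0 combined 1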

