Message-ID: <CAGv8E+xwzwZgEsGb6wmG3KFffMRxGKmGBqrueHwXzFwv2WB=_Q@mail.gmail.com>
Date: Wed, 11 Oct 2017 11:46:09 +0600
From: "Sergey K." <simkergey@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: High CPU load by native_queued_spin_lock_slowpath
I'm using an ifb0 device for outgoing traffic.
I have one bond0 interface with the uplink to the Internet, and two
interfaces, eth0 and eth2, facing local users.
ifb0 is used for shaping Internet traffic going from bond0 to eth0 or eth2.
All outgoing traffic to eth0 and eth2 is redirected to ifb0.
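
For reference, here is a minimal sketch of the kind of single-ifb setup
described above. The interface names are the ones mentioned, but the
redirect point, the htb classes and the rates are my assumptions, not
the real config:

  # load one ifb device and bring it up
  modprobe ifb numifbs=1
  ip link set dev ifb0 up

  # send bond0 ingress (traffic that will leave via eth0/eth2) to ifb0
  tc qdisc add dev bond0 handle ffff: ingress
  tc filter add dev bond0 parent ffff: protocol ip u32 match u32 0 0 \
      action mirred egress redirect dev ifb0

  # all shaping then happens behind a single htb root on ifb0
  # (classid and rate values are placeholders)
  tc qdisc add dev ifb0 root handle 1: htb default 10
  tc class add dev ifb0 parent 1: classid 1:10 htb rate 1gbit

With this layout every CPU that receives packets has to take the same
ifb0 root qdisc lock, which is presumably where the
native_queued_spin_lock_slowpath time goes.
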
> What about multiple ifb instead, one per RX queue?
Are you suggesting that I redirect traffic from every queue to its own
ifb device? I do not quite understand.
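
If the idea is to split the single ifb into several, it could look
roughly like the sketch below: several ifb devices, each with its own
htb tree (and therefore its own root qdisc lock), and redirect filters
spreading traffic across them. The split here uses the flow hash via
the basic/em_meta classifier's rxhash key, which only approximates
"one ifb per RX queue" and is my assumption (the exact ematch syntax
may need adjusting; a u32 split by destination prefix would be another
way to spread the load):

  # four ifb devices, one htb per ifb (classids and rates are placeholders)
  modprobe ifb numifbs=4
  for i in 0 1 2 3; do
      ip link set dev ifb$i up
      tc qdisc add dev ifb$i root handle 1: htb default 10
      tc class add dev ifb$i parent 1: classid 1:10 htb rate 250mbit
  done

  # spread bond0 ingress across the ifbs by the low bits of the flow hash
  tc qdisc add dev bond0 handle ffff: ingress
  for i in 0 1 2 3; do
      tc filter add dev bond0 parent ffff: protocol ip basic \
          match "meta(rxhash mask 0x3 eq $i)" \
          action mirred egress redirect dev ifb$i
  done

Each ifb would then only see a share of the flows, so each htb root
lock is contended by fewer CPUs. Is that the kind of layout you mean?
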
2017-10-10 20:07 GMT+06:00 Eric Dumazet <eric.dumazet@...il.com>:
> On Tue, 2017-10-10 at 18:00 +0600, Sergey K. wrote:
>> I'm using Debian 9 (stretch), kernel 4.9, on an HP DL385 G7 server
>> with 32 CPU cores. The NIC queues are pinned to the CPU cores. The
>> server shapes traffic (iproute2 with the htb discipline + skbinfo +
>> ipset + ifb) and filters some rules with iptables.
>>
>> When traffic rises to about 1 Gbit/s, the CPU load gets very high.
>> Perf shows that the kernel function
>> native_queued_spin_lock_slowpath accounts for about 40% of CPU time.
>>
>> After several hours of searching, I found that if I remove the htb
>> discipline from ifb0, the high load goes away.
>> So I think the problem is with classification and shaping in htb.
>>
>> Does anyone know how to solve this?
>
> You use a single ifb0 on the whole (multiqueue) device for ingress?
>
> What about multiple ifb instead, one per RX queue?
>
> An alternative is to reduce contention by using a single RX queue.
>
>
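
(For the single-RX-queue alternative: depending on the NIC driver, that
would be something like "ethtool -L <slave-nic> combined 1" or
"ethtool -L <slave-nic> rx 1" on each slave of bond0 - just an
illustration, the right channel parameters depend on the hardware.)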