Open Source and information security mailing list archives
 
Message-ID: <b82afbaf-c548-5b7e-8853-12c3e6a8f757@kernel.org>
Date: Sun, 27 Aug 2023 10:51:59 -0600
From: David Ahern <dsahern@...nel.org>
To: Martin Zaharinov <micron10@...il.com>, netdev <netdev@...r.kernel.org>,
 Eric Dumazet <edumazet@...gle.com>
Subject: Re: High Cpu load when run smartdns : __ipv6_dev_get_saddr

On 8/27/23 7:20 AM, Martin Zaharinov wrote:
> Hi Eric 
> 
> 
> I need your help to determine whether this is a bug or not.
> 
> I spoke with the smartdns team and tried to investigate their code, but so far I have not found the cause.
> 
> The test system has 5k PPP users on a pppoe device.
> 
> After starting smartdns, the service goes to 100% CPU load.
> 
> In the normal case, running either of two other DNS servers (ISC BIND or Knot), everything is fine.
> 
> But when running smartdns, perf shows:
> 
> 
>  PerfTop:    4223 irqs/sec  kernel:96.9%  exact: 100.0% lost: 0/0 drop: 0/0 [4000Hz cycles],  (target_pid: 1208268)
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> 
>     28.48%  [kernel]        [k] __ipv6_dev_get_saddr
>     12.31%  [kernel]        [k] l3mdev_master_ifindex_rcu
>      6.63%  [pppoe]         [k] pppoe_rcv
>      3.82%  [kernel]        [k] ipv6_dev_get_saddr
>      2.07%  [kernel]        [k] __dev_queue_xmit

Can you post stack traces for the top 5 symbols?
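One way to capture those call chains is a short `perf record` session against the same process; this is only a sketch — the pid 1208268 is taken from the PerfTop header above, and the 10-second duration is an arbitrary choice:

```shell
# Record samples with call graphs (-g) for the smartdns process;
# 1208268 is the target_pid from the PerfTop output above.
perf record -g -p 1208268 -- sleep 10

# Summarize the recorded samples, showing call chains per symbol.
perf report --stdio --sort symbol
```

`perf top -g` gives the same call-chain view live, if re-recording is inconvenient.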

What is the packet rate when the above is taken?
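A rough packets-per-second figure can be read from `/proc/net/dev` deltas; `IFACE=eth0` below is a placeholder — substitute the pppoe/uplink device actually carrying the traffic:

```shell
# Rough RX packets-per-second estimate from /proc/net/dev.
# Field 3 of each interface line is the cumulative RX packet count.
IFACE=eth0
rx1=$(awk -v dev="$IFACE:" '$1 == dev { print $3 }' /proc/net/dev)
sleep 1
rx2=$(awk -v dev="$IFACE:" '$1 == dev { print $3 }' /proc/net/dev)
echo "$((rx2 - rx1)) rx packets/sec on $IFACE"
```

`ip -s link show $IFACE` sampled twice gives the same counters in a friendlier format.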

4,223 irqs/sec is not much of a load; can you add some details on the
hardware and networking setup (e.g., l3mdev reference suggests you are
using VRF)?
