Message-ID: <5db8d6bc.9fe1.1932a2f5ce9.Coremail.00107082@163.com>
Date: Thu, 14 Nov 2024 18:19:27 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Paolo Abeni" <pabeni@...hat.com>
Cc: davem@...emloft.net, dsahern@...nel.org, edumazet@...gle.com, 
	kuba@...nel.org, netdev@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] net/ipv4/proc: Avoid usage of seq_printf() when
 reading /proc/net/snmp


At 2024-11-14 17:30:34, "Paolo Abeni" <pabeni@...hat.com> wrote:
>On 11/11/24 05:56, David Wang wrote:
>> seq_printf() is costly; when reading /proc/net/snmp, profiling indicates
>> seq_printf() takes more than 50% of the samples in snmp_seq_show():
>> 	snmp_seq_show(97.751% 158722/162373)
>> 	    snmp_seq_show_tcp_udp.isra.0(40.017% 63515/158722)
>> 		seq_printf(83.451% 53004/63515)
>> 		seq_write(1.170% 743/63515)
>> 		_find_next_bit(0.727% 462/63515)
>> 		...
>> 	    seq_printf(24.762% 39303/158722)
>> 	    snmp_seq_show_ipstats.isra.0(21.487% 34104/158722)
>> 		seq_printf(85.788% 29257/34104)
>> 		_find_next_bit(0.331% 113/34104)
>> 		seq_write(0.235% 80/34104)
>> 		...
>> 	    icmpmsg_put(7.235% 11483/158722)
>> 		seq_printf(41.714% 4790/11483)
>> 		seq_write(2.630% 302/11483)
>> 		...
>> Time for a million rounds of stress reading /proc/net/snmp:
>> 	real	0m24.323s
>> 	user	0m0.293s
>> 	sys	0m23.679s
>> On average, reading /proc/net/snmp takes 0.023ms.
>> With this patch, the extra cost of seq_printf() is avoided, and a million
>> rounds of reading /proc/net/snmp now take only ~15.853s of sys time:
>> 	real	0m16.386s
>> 	user	0m0.280s
>> 	sys	0m15.853s
>> On average, one read takes 0.015ms, a ~40% improvement.
>> 
>> Signed-off-by: David Wang <00107082@....com>
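
For reference, the conversion is conceptually along these lines; this is a
simplified sketch rather than the actual diff, and the helper name and the
flat counter array are illustrative only:

	/*
	 * Replace per-field seq_printf(seq, " %lu", val) with
	 * seq_put_decimal_ull(), which skips format-string parsing.
	 */
	#include <linux/seq_file.h>

	static void snmp_seq_put_counters(struct seq_file *seq,
					  const unsigned long *counters,
					  int count)
	{
		int i;

		for (i = 0; i < count; i++)
			seq_put_decimal_ull(seq, " ", counters[i]);
		seq_putc(seq, '\n');
	}
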
>
>If user space is really concerned with SNMP access performance, I
>think such information should be exposed via netlink.
>
>Still, the goal of the optimization looks doubtful. The total number of
>MIB domains is constant and limited (unlike the number of network devices,
>which in specific setups can grow a lot). Stats polling should be a
>low-frequency operation. Why do you need to optimize it?

Well, one thing I think is worth mentioning: optimizing /proc entries can help
increase the sampling frequency, and hence the accuracy of rate analysis,
for monitoring tools with a fixed/limited CPU quota.

And for /proc/net/*, the improvement is amplified when network namespaces are
taken into account, since each namespace exposes its own copy of these files.

I think it is worth optimizing.
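
To make that concrete, the access pattern I have in mind is a poller that
re-reads /proc/net/snmp in a loop; cutting the per-read cost directly buys
more samples within the same CPU budget. A rough userspace sketch of such a
stress/poll loop (the exact tool behind the numbers above may differ; the
round count and buffer size are arbitrary), run under time(1):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		static char buf[1 << 16];
		long i;

		for (i = 0; i < 1000000; i++) {	/* "a million rounds" */
			int fd = open("/proc/net/snmp", O_RDONLY);

			if (fd < 0)
				return 1;
			/* drain the whole file so the full show() path runs */
			while (read(fd, buf, sizeof(buf)) > 0)
				;
			close(fd);
		}
		return 0;
	}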

>
>I don't think we should accept this change either. And a solid explanation
>would be needed to introduce a netlink MIB interface.
>
>> ---
>>  net/ipv4/proc.c | 116 ++++++++++++++++++++++++++++--------------------
>
>FTR you missed mptcp.
>
>/P
