Date:	Wed, 25 Apr 2007 14:59:15 +0400
From:	Maxim Uvarov <muvarov@...mvista.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [patch] Performance Stats: Kernel patch

Hello Andrew,

I've added a taskstats interface to that; the patch is attached.
Please also see my answers below.
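
For context, the new counters ride along in struct taskstats; roughly
sketched, the additions look like this (the syscall counter's field
name is my guess here, the attached patch is authoritative):

struct taskstats {
	...
	__u64	nvcsw;		/* voluntary context switches */
	__u64	nivcsw;		/* involuntary context switches */
	__u64	sysc_cnt;	/* number of system calls (name assumed) */
};

Userspace can then fetch them over the taskstats netlink socket the
same way as the existing delay-accounting fields, e.g. with
Documentation/accounting/getdelays.c.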

Andrew Morton wrote:

>(re-added lkml)
>
>>Patch makes available to the user the following 
>>thread performance statistics:
>>   * Involuntary Context Switches (task_struct->nivcsw)
>>   * Voluntary Context Switches (task_struct->nvcsw)
>
>I suppose they might be useful, but I'd be interested in hearing what
>the uses of this information are - why is it valuable?
>
We have had a customer request this feature. I'm not sure exactly why
they want it, but in a telecom system it is very common to monitor
statistical information about various parts of the system and watch for
anomalies and trends. If one of these statistics shows a sudden spike
or a gradual trend, the operator knows to take action before a problem
occurs, or can go back and analyze why the spike occurred.

>>   * Number of system calls (added new counter 
>>     thread_info->sysc_cnt)
>
>eek.  syscall entry is a really hot hotpath, and, perhaps worse, it's the
>sort of thing which people often measure ;)
>
>I agree that this is a potentially interesting piece of instrumentation,
>but it would need to be _super_ interesting to justify just the single
>instruction overhead, and the cacheline touch.
>
>So, again, please provide justification for this additional overhead.
>
The overhead is so small that it is difficult to measure the difference.
I tried to measure syscall execution time with the lat_syscall program
from the lmbench package. I ran each test for 2-3 hours with different
syscalls on a UP qemu machine.

The tests are:
./lat_syscall -N 1000 null
./lat_syscall -N 1000 read
./lat_syscall -N 1000 write
./lat_syscall -N 1000 stat
./lat_syscall -N 1000 fstat
./lat_syscall -N 1000 open

The result is that the patched run falls within the range of the three
runs without the patch.

Without the patch (three runs):

Simple syscall: 1.0660 microseconds
Simple read: 3.5032 microseconds
Simple write: 2.6576 microseconds
Simple stat: 41.0829 microseconds
Simple fstat: 10.1343 microseconds
Simple open/close: 904.6792 microseconds

Simple syscall: 0.9961 microseconds
Simple read: 3.0027 microseconds
Simple write: 2.1600 microseconds
Simple stat: 47.7678 microseconds
Simple fstat: 10.8242 microseconds
Simple open/close: 905.9916 microseconds

Simple syscall: 1.0035 microseconds
Simple read: 3.0627 microseconds
Simple write: 2.1435 microseconds
Simple stat: 39.1947 microseconds
Simple fstat: 10.2982 microseconds
Simple open/close: 849.1624 microseconds


With the patch:

Simple syscall: 1.0013 microseconds
Simple read: 3.6981 microseconds
Simple write: 2.6216 microseconds
Simple stat: 43.5101 microseconds
Simple fstat: 11.1318 microseconds
Simple open/close: 925.4793 microseconds

Because this measurement will be used with the taskstats interface,
I left it under #ifdef CONFIG_TASKSTATS.
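
For illustration, the guarded increment on the entry path looks roughly
like this (a sketch of the idea, not the attached patch verbatim):

#ifdef CONFIG_TASKSTATS
	/* one counter bump per system call, on the hot entry path */
	current_thread_info()->sysc_cnt++;
#endif

so kernels built without CONFIG_TASKSTATS pay no overhead at all.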

Best regards,
Maxim.


[Attachment: perf_stat.patch (text/plain, 13461 bytes)]
