Date:	Sat, 11 Jul 2009 21:03:23 +0400
From:	Andrey Borzenkov <arvidjaar@...l.ru>
To:	linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: How to monitor Linux NFS client load?

We recently had a case of very high latencies on NFS reads as 
reported by the application (SAP R/3). The NFS server was a NetApp FAS; 
according to NetApp statistics, average volume read latencies were on 
the order of 10 ms, while SAP's stats showed 30-50 ms. The systems were 
interconnected by dedicated 1 Gb/s Cisco switches (3750G) with about 
30% maximum load on the interfaces.

On a colleague's advice we changed sunrpc.tcp_slot_table_entries from 
the default of 16 to 128, which seemed to improve the situation 
considerably, without visibly changing the filer's load pattern.
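
For reference, a minimal sketch of how one might apply the change at 
runtime (a hypothetical illustration, not our exact procedure; it 
assumes the setting is exposed at 
/proc/sys/sunrpc/tcp_slot_table_entries, and as far as I understand the 
new value only applies to transports created afterwards, so the 
filesystems have to be remounted to pick it up):

    #!/usr/bin/env python
    # Sketch: bump the RPC slot table (effectively the queue depth)
    # for NFS/TCP transports created from now on.
    SYSCTL = "/proc/sys/sunrpc/tcp_slot_table_entries"

    with open(SYSCTL) as f:
        print("current slots: %s" % f.read().strip())  # 16 by default here

    with open(SYSCTL, "w") as f:
        f.write("128\n")  # the value we ended up using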

Now, I can understand why we observed much higher latency on the system 
and why changing what is effectively the queue depth helped. But I am 
thoroughly frustrated that there does not appear to be *any* way to 
detect this situation on the Linux side, or to get real numbers for 
actual NFS I/O latencies or for the number of requests waiting to be 
executed (and I do not even dream of per-mount-point stats).
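
To make it concrete, this is roughly the kind of number I am after, 
sketched against /proc/self/mountstats (which, if I read the nfs-utils 
mountstats tool correctly, lists per-op counters as "ops trans timeouts 
bytes_sent bytes_recv queue rtt execute", the last three being 
cumulative milliseconds; I have not verified that our kernels populate 
it):

    #!/usr/bin/env python
    # Sketch: average per-op RTT from /proc/self/mountstats.
    # Keeps the last mount's values if several mounts list the same op.
    def per_op_rtt(path="/proc/self/mountstats"):
        stats = {}
        in_ops = False
        for line in open(path):
            line = line.strip()
            if line.startswith("device "):
                in_ops = False          # new mount section begins
            elif line.startswith("per-op statistics"):
                in_ops = True
            elif in_ops and ":" in line:
                op, rest = line.split(":", 1)
                fields = rest.split()
                if len(fields) >= 8:
                    ops = int(fields[0])     # completed requests
                    rtt_ms = int(fields[6])  # cumulative RTT in ms
                    if ops:
                        stats[op] = rtt_ms / float(ops)
        return stats

    if __name__ == "__main__":
        for op, avg in sorted(per_op_rtt().items()):
            print("%-16s %8.1f ms avg RTT" % (op, avg))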

I would be grateful for any hints on how we can monitor the Linux NFS 
client and get real-life numbers for what happens inside. Thank you!

-andrey

