Date:	Fri, 8 Dec 2006 01:09:01 +0100
From:	"roland" <devzero@....de>
To:	"Andrew Morton" <akpm@...l.org>, "Jay Lan" <jlan@...r.sgi.com>
Cc:	"Fengguang Wu" <fengguang.wu@...il.com>,
	<linux-kernel@...r.kernel.org>, <lserinol@...il.com>
Subject: Re: I/O statistics per process

hi!

i haven't seen anything new on this (andrew? jay?), and i don't know whether 
someone else has sent a patch, but i'd like to report that i came across a 
really nice tool which would immediately benefit from a per-process i/o 
statistics feature.

please note - this mail is not meant to clamor for such a feature - it's just 
to point out some more benefits this feature would bring.

vmktree at http://vmktree.org/ is a really nice monitoring tool that can 
graph performance statistics for a host running vmware virtual machines 
(closed source - evil - i know ;) - and it can break those statistics down 
per virtual machine.

what hurts most here is that you have no clue how much I/O each of those 
virtual machines is generating - you can sort of "guess" that a machine with 
a 100% idle cpu will not generate any significant amount of I/O, but vmktree 
would be so much more powerful with per-process I/O statistics, since you 
could use your systems more efficiently if you also knew who your I/O hogs 
are.

being able to read such information from /proc would be a real killer 
feature - good for troubleshooting and also good for gathering important 
statistics!
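
for illustration only: here is a minimal sketch of how such numbers could be 
read for one task, assuming a hypothetical /proc/<pid>/io file with 
"read_bytes:" / "write_bytes:" lines - no such file exists in mainline today, 
so the path and field names are only a guess at what an export could look 
like.

/*
 * Minimal sketch, not a working feature: it assumes a hypothetical
 * /proc/<pid>/io file with "read_bytes: N" / "write_bytes: N" lines.
 * No such file exists in mainline at the time of writing, so the path
 * and field names are just an illustration of what a reader could do.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int read_task_io(pid_t pid, unsigned long long *rbytes,
                        unsigned long long *wbytes)
{
        char path[64], key[32];
        unsigned long long val;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/io", (int)pid);
        f = fopen(path, "r");
        if (!f)
                return -1;

        *rbytes = *wbytes = 0;
        while (fscanf(f, "%31[^:]: %llu\n", key, &val) == 2) {
                if (!strcmp(key, "read_bytes"))
                        *rbytes = val;
                else if (!strcmp(key, "write_bytes"))
                        *wbytes = val;
        }
        fclose(f);
        return 0;
}

int main(int argc, char **argv)
{
        unsigned long long r, w;
        pid_t pid = argc > 1 ? atoi(argv[1]) : getpid();

        if (read_task_io(pid, &r, &w)) {
                perror("read_task_io");
                return 1;
        }
        printf("pid %d: %llu bytes read, %llu bytes written\n",
               (int)pid, r, w);
        return 0;
}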

roland

ps:
this is another person desperately seeking a tool that provides this 
information:  http://www.tek-tips.com/viewthread.cfm?qid=1284288&page=4




----- Original Message ----- 
From: "Andrew Morton" <akpm@...l.org>
To: "Jay Lan" <jlan@...r.sgi.com>
Cc: "roland" <devzero@....de>; "Fengguang Wu" <fengguang.wu@...il.com>;
<linux-kernel@...r.kernel.org>; <lserinol@...il.com>
Sent: Thursday, September 28, 2006 11:14 PM
Subject: Re: I/O statistics per process


> On Thu, 28 Sep 2006 15:00:17 -0700
> Jay Lan <jlan@...r.sgi.com> wrote:
>
>> >>>    in __set_page_dirty_[no]buffers().) (But that ends up being wrong
>> >>>    if someone truncates the file before it got written)
>> >>>
>> >>> - it doesn't account for file readahead (although it easily could)
>> >>>
>> >>> - it doesn't account for pagefault-initiated readahead (it could)
>> >>>
>>
>> Mmm, I am not a true FS I/O person. The data collection patches I
>> submitted in Nov 2004 were the code I inherited, which has been
>> used in production systems by our CSA customers. We lost a bit of
>> content and accuracy when CSA was ported from IRIX to Linux. I am
>> sure there is room for improvement without much overhead.
>
> OK, well it sounds like we're free to define these in any way we like.  So
> we actually get to make them mean something useful - how nice.
>
> I hereby declare: "approximately equal to the number of filesystem bytes
> which this task has caused to occur, or which shall occur in the near
> future".
>
>> Maybe FS I/O guys can chip in?
>
> I used to be one of them.  I can take a look at doing this.  Given the
> lack of any application to read the darn numbers out, I guess I'll need
> to expose them in /proc for now.  Yes, that monitoring patch (and an
> application to read from it!) would be appreciated, thanks.
>
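
as a rough illustration of the kind of reader asked for above, here is a 
sketch that walks /proc, sums per-task byte counters, and prints the biggest 
I/O producers. it reuses the hypothetical /proc/<pid>/io format from the 
earlier sketch, which again is an assumption, not anything that exists in 
mainline as of this thread.

/*
 * Rough sketch of a userspace reader: walk /proc, sum each task's byte
 * counters, and print the top I/O producers.  It assumes the same
 * hypothetical /proc/<pid>/io format as the sketch earlier in this mail;
 * nothing like it is in mainline yet.
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_TASKS 4096

struct task_io {
        int pid;
        unsigned long long bytes;       /* read_bytes + write_bytes */
};

static int by_bytes_desc(const void *a, const void *b)
{
        const struct task_io *x = a, *y = b;

        return (y->bytes > x->bytes) - (y->bytes < x->bytes);
}

int main(void)
{
        static struct task_io tasks[MAX_TASKS];
        int i, ntasks = 0;
        struct dirent *de;
        DIR *proc = opendir("/proc");

        if (!proc) {
                perror("/proc");
                return 1;
        }

        while ((de = readdir(proc)) != NULL && ntasks < MAX_TASKS) {
                char path[288], key[32];
                unsigned long long val, total = 0;
                FILE *f;

                if (!isdigit((unsigned char)de->d_name[0]))
                        continue;       /* not a pid directory */

                snprintf(path, sizeof(path), "/proc/%s/io", de->d_name);
                f = fopen(path, "r");
                if (!f)
                        continue;       /* task exited or no permission */

                while (fscanf(f, "%31[^:]: %llu\n", key, &val) == 2)
                        if (!strcmp(key, "read_bytes") ||
                            !strcmp(key, "write_bytes"))
                                total += val;
                fclose(f);

                tasks[ntasks].pid = atoi(de->d_name);
                tasks[ntasks].bytes = total;
                ntasks++;
        }
        closedir(proc);

        qsort(tasks, ntasks, sizeof(tasks[0]), by_bytes_desc);
        for (i = 0; i < ntasks && i < 10; i++)
                printf("pid %-6d %llu bytes\n", tasks[i].pid, tasks[i].bytes);

        return 0;
}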

