Message-ID: <20060801193948.GC20108@suse.de>
Date: Tue, 1 Aug 2006 21:39:48 +0200
From: Jens Axboe <axboe@...e.de>
To: Mark Seger <Mark.Seger@...com>
Cc: linux-kernel@...r.kernel.org, Alan Brunelle <Alan.Brunelle@...com>
Subject: Re: Accuracy of disk statistics IO counter
On Tue, Aug 01 2006, Mark Seger wrote:
>
> >>Specifically, I wrote a 1GB file with a blocksize of 1MB, which would
> >>result in 1000 writes at the application level. What I believe then
> >>happens is that each write turns into 8 128KB requests to the driver,
> >>which should result in 8000 actual writes. Toss in metadata operations
> >>and who knows what else and the actual number should be a little
> >>higher. What I've seen after repeating the tests a number of times on
> >>2.6.16-27 is numbers ranging from 6800-7000 writes, which feels like a
> >>big enough difference to at least point out.
> >>
> >>
> >
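To make the comparison concrete, here is a rough Python sketch of the
bookkeeping described above. It snapshots the per-disk write counters in
/proc/diskstats around the test and derives the expected count from the
1MB-write / 128KB-request split in the mail; the device name ("sda") is only
a placeholder, and the field indices assume the documented /proc/diskstats
layout for a whole disk.

#!/usr/bin/env python
# Sketch: compare the expected number of driver-level writes with the
# delta in the "writes completed" counter from /proc/diskstats.
# Layout assumed (Documentation/iostats.txt): major minor name, then the
# usual counters; whole-line index 7 = writes completed, 9 = sectors written.

DEVICE = "sda"   # placeholder device name, not from the original mail

def disk_write_stats(device=DEVICE):
    """Return (writes_completed, sectors_written) for one whole disk."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 10 and fields[2] == device:
                return int(fields[7]), int(fields[9])
    raise ValueError("device %s not found in /proc/diskstats" % device)

# Expected request count for the test described above: a 1 GB file written
# in 1 MB chunks, each 1 MB write split into 8 requests of 128 KB.
chunk = 1024 * 1024                                    # 1 MB application write
file_size = 1024 * chunk                               # "1 GB" (1024 writes; ~1000 in the mail)
requests_per_write = chunk // (128 * 1024)             # 8
expected = (file_size // chunk) * requests_per_write   # ~8000 driver-level writes

before_w, before_s = disk_write_stats()
# ... run the 1 GB write test here and sync, e.g. something like
#     dd if=/dev/zero of=<testfile> bs=1M count=1024; sync
after_w, after_s = disk_write_stats()

print("expected writes: %d  counted writes: %d" % (expected, after_w - before_w))

These are presumably the same counters behind the disk statistics being
questioned, so a shortfall like the 6800-7000 figure would show up directly
as a smaller delta.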
> >Install http://brick.kernel.dk/snaps/blktrace-git-20060723022503.tar.gz
> >and blktrace your disk for the duration of the test and compare the io
> >numbers. Requires 2.6.17 or later, though.
> >
> >
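Once blkparse has produced a text dump of the trace, the write count Jens
wants to compare can be pulled out with a small filter like the Python
sketch below. It assumes blkparse's default per-event output (device, cpu,
sequence, timestamp, pid, action, RWBS, ...) and counts completion events
('C') whose RWBS field contains 'W'; the trace file name is a placeholder.

#!/usr/bin/env python
# Sketch: count completed write requests in a blkparse text dump, produced
# with something like:
#   blktrace -d /dev/sda -o trace        (while the test runs)
#   blkparse trace > trace.txt
# Assumes the default blkparse event format, whitespace-separated:
#   dev cpu seq timestamp pid action rwbs sector + nblocks [process]
import sys

def count_write_completions(path):
    completions = 0
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 7:
                continue                 # too short to be an event line
            if "," not in fields[0]:
                continue                 # skip summary lines ("Total (8,0): ...")
            action, rwbs = fields[5], fields[6]
            if action == "C" and "W" in rwbs:
                completions += 1
    return completions

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "trace.txt"
    print("completed writes in %s: %d" % (path, count_write_completions(path)))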
> I had problems getting blktrace going and posted a note to
> linux-btrace@...r.kernel.org at Alan Brunelle's suggestion, and he also
> said he'd take a closer look at it himself.

Documentation issues, sorry about that. It hasn't been updated for the
debugfs change.

> On the other hand, he
> pointed me to 'stap' and I was able to use it to get the details I was
> looking for - SystemTAP really rocks for this type of analysis! As it
> turns out, my assumption about the driver blocksize was wrong (sorry about
> that): the size of requests from the driver to the disk is 160mb, so the
> I/O count was smaller than I had anticipated.
160KiB, I'm assuming? :-)
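
For what it's worth, the request size can also be cross-checked without
stap by dividing the sectors-written delta by the writes-completed delta
from the same /proc/diskstats snapshots sketched earlier; the helper below
assumes those (writes, sectors) pairs and an otherwise idle disk.

# Sketch: average size of a completed write request over a test interval,
# from /proc/diskstats (writes_completed, sectors_written) snapshots.
# Sectors are 512 bytes.
def avg_write_size(before, after):
    writes = after[0] - before[0]
    sectors = after[1] - before[1]
    if writes == 0:
        return 0.0
    return sectors * 512.0 / writes      # bytes per completed write request

# Roughly 131072 would mean 128 KiB requests, 163840 would mean 160 KiB,
# matching the size Jens suspects was meant by "160mb".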
--
Jens Axboe