Message-ID: <453F3D4E.4020608@de.ibm.com>
Date: Wed, 25 Oct 2006 12:32:46 +0200
From: Martin Peschke <mp3@...ibm.com>
To: Jens Axboe <jens.axboe@...cle.com>
CC: Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org
Subject: Re: [Patch 0/5] I/O statistics through request queues
Jens Axboe wrote:
>>> Thanks! Also note that you do not need to log every event; just register
>>> a mask of interesting ones to decrease the logging rate. We could do
>>> with some better setup for that, but at least you should be able to
>>> filter out some unwanted events.
>> ...and consequently try to scale down relay buffers, reducing the risk of
>> memory constraints caused by blktrace activation.
>
> Pretty pointless, unless you are tracing lots of disks. 4x128KB gone
> won't be a showstopper for anyone.
That is 4x128KB per (online) CPU and per traced device, isn't it?
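
For illustration, here is a minimal user-space sketch of trimming both the
event mask and the relay buffers via the BLKTRACESETUP ioctl from
linux/blktrace_api.h; the device name and the sizes and CPU/device counts
in the comments are assumptions, not measurements:

/* Sketch: reduced blktrace event mask plus smaller relay buffers. */
#include <string.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/blktrace_api.h>

int main(void)
{
	struct blk_user_trace_setup buts;
	int fd = open("/dev/sda", O_RDONLY);	/* device is an assumption */

	if (fd < 0)
		return 1;

	memset(&buts, 0, sizeof(buts));
	/* trace only issue and completion events */
	buts.act_mask = BLK_TC_ISSUE | BLK_TC_COMPLETE;
	/* 2 sub-buffers of 32KB instead of the default 4x128KB; relay
	 * allocates this per online CPU per traced device, so e.g.
	 * 4x128KB x 16 CPUs x 64 devices would be 512MB by default */
	buts.buf_size = 32 * 1024;
	buts.buf_nr = 2;

	if (ioctl(fd, BLKTRACESETUP, &buts) < 0)
		return 1;
	ioctl(fd, BLKTRACESTART);
	/* ... consume trace data from the relay files in debugfs ... */
	ioctl(fd, BLKTRACESTOP);
	ioctl(fd, BLKTRACETEARDOWN);
	return 0;
}
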
>>>> However, a fast network connection plus a second system for blktrace
>>>> data processing are serious requirements. Think of servers secured
>>>> by firewalls. Reading some counters in debugfs, sysfs or whatever
>>>> might be more appropriate for someone who has noticed an unexpected
>>>> I/O slowdown and needs directions for further investigation.
>>> It's hard to make something that will suit everybody. Maintaining some
>>> counters in sysfs is of course less expensive when your POV is CPU
>>> cycles.
>> Counters are also cheaper with regard to memory consumption. Counters
>> probably cause fewer side effects, but are less flexible than
>> full-blown traces.
>
> And the counters are special cases and extremely inflexible.
Well, I disagree with "extremely".
These statistics have attributes that allow users to adjust data
aggregation, e.g. to retain more detail in a histogram by adding
buckets.
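
To make that concrete, a sketch only (not the actual interface of this
patch set): a latency histogram whose user-settable bucket count trades a
little memory for resolution; all names here are made up for illustration:

/* Sketch: linear-bucket latency histogram with a tunable bucket count. */
#include <stdlib.h>

struct latency_hist {
	unsigned int nr_buckets;	/* user-tunable, e.g. via an attribute */
	unsigned long long max_ns;	/* upper bound of the last bucket, > 0 */
	unsigned long *count;		/* one counter per bucket */
};

static struct latency_hist *hist_alloc(unsigned int nr_buckets,
				       unsigned long long max_ns)
{
	struct latency_hist *h = malloc(sizeof(*h));

	if (!h)
		return NULL;
	h->nr_buckets = nr_buckets;
	h->max_ns = max_ns;
	h->count = calloc(nr_buckets, sizeof(*h->count));
	if (!h->count) {
		free(h);
		return NULL;
	}
	return h;
}

static void hist_account(struct latency_hist *h, unsigned long long ns)
{
	unsigned long long i = ns * h->nr_buckets / h->max_ns;

	if (i >= h->nr_buckets)
		i = h->nr_buckets - 1;	/* clamp overflows into the last bucket */
	h->count[i]++;
}

Doubling nr_buckets halves the width of each bucket, so a user can retain
more detail at the cost of a few extra counters per device.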