Message-ID: <20061025051238.GO4281@kernel.dk>
Date: Wed, 25 Oct 2006 07:12:39 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Martin Peschke <mp3@...ibm.com>
Cc: Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org
Subject: Re: [Patch 0/5] I/O statistics through request queues
On Wed, Oct 25 2006, Martin Peschke wrote:
> >>>I have to say it's news to
> >>>me that it's performance intensive; tests I did with Alan Brunelle a
> >>>year or so ago showed it to be quite low impact.
> >>I found some discussions on linux-btrace (February 2006).
> >>There is little information on how the alleged 2 percent impact has
> >>been determined. Test cases seem to comprise formatting disks ...hmm.
> >
> >It may sound strange, but formatting a large drive generates a huge
> >flood of block layer events from lots of I/O being queued and merged. So it's
> >not a bad benchmark for this type of thing. And it's easy to test :-)
>
> Just wondering to what degree this might resemble I/O workloads run
> by customers in their data centers.
It won't, of course; the point is to generate a flood of events to put as
much pressure on blktrace logging as possible. Dirtying tons of data
does that.
> >>>You'd be silly to store traces locally; send them out over the network.
> >>Will try this next and post complaints, if any, along with numbers.
> >
> >Thanks! Also note that you do not need to log every event; just register
> >a mask of interesting ones to decrease the output logging rate. We could
> >do with some better setup for that, but at least you should be
> >able to filter out some unwanted events.
>
> ...and consequently try to scale down relay buffers, reducing the risk of
> memory constraints caused by blktrace activation.
Pretty pointless, unless you are tracing lots of disks. 4x128KB gone
won't be a showstopper for anyone.
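
As a side note, here is a minimal sketch of the setup being discussed:
registering a reduced event mask and smaller relay buffers through the
BLKTRACESETUP ioctl. It assumes the blk_user_trace_setup structure from
<linux/blktrace_api.h>; the device path, mask and buffer sizes are only
illustrative, not recommended values.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/blktrace_api.h>

int main(void)
{
	struct blk_user_trace_setup buts;
	int fd = open("/dev/sda", O_RDONLY);	/* example device */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&buts, 0, sizeof(buts));
	/* log only queue, issue and completion events */
	buts.act_mask = BLK_TC_QUEUE | BLK_TC_ISSUE | BLK_TC_COMPLETE;
	/* scale the per-cpu relay buffers down from the defaults */
	buts.buf_size = 64 * 1024;
	buts.buf_nr = 2;

	if (ioctl(fd, BLKTRACESETUP, &buts) < 0 ||
	    ioctl(fd, BLKTRACESTART) < 0) {
		perror("blktrace setup/start");
		return 1;
	}

	/* ... read trace data from the relay files in debugfs ... */

	ioctl(fd, BLKTRACESTOP);
	ioctl(fd, BLKTRACETEARDOWN);
	return 0;
}

These are the same knobs the blktrace utility exposes on the command
line for event filtering and buffer sizing.
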
> >>However, a fast network connection plus a second system for blktrace
> >>data processing are serious requirements. Think of servers secured
> >>by firewalls. Reading some counters in debugfs, sysfs or whatever
> >>might be more appropriate for someone who has noticed an unexpected
> >>I/O slowdown and needs directions for further investigation.
> >
> >It's hard to make something that will suit everybody. Maintaining some
> >counters in sysfs is of course less expensive when your POV is CPU
> >cycles.
>
> Counters are also cheaper with regard to memory consumption. Counters
> probably cause fewer side effects, but are less flexible than
> full-blown traces.
And the counters are special cases and extremely inflexible.
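
Again only as an illustration of the counter-based alternative: per-device
I/O counters already exist in /sys/block/<dev>/stat and can be read without
any tracing infrastructure. The device name below is just an example.

#include <stdio.h>

int main(void)
{
	unsigned long rd_ios, rd_merges, rd_sectors, rd_ticks;
	unsigned long wr_ios, wr_merges, wr_sectors, wr_ticks;
	unsigned long in_flight, io_ticks, time_in_queue;
	FILE *f = fopen("/sys/block/sda/stat", "r");	/* example device */

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* the stat file holds eleven accumulated counters per device */
	if (fscanf(f, "%lu %lu %lu %lu %lu %lu %lu %lu %lu %lu %lu",
		   &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
		   &wr_ios, &wr_merges, &wr_sectors, &wr_ticks,
		   &in_flight, &io_ticks, &time_in_queue) == 11)
		printf("reads: %lu (merged %lu), writes: %lu (merged %lu), "
		       "time doing I/O: %lu ms\n",
		       rd_ios, rd_merges, wr_ios, wr_merges, io_ticks);

	fclose(f);
	return 0;
}

This shows the trade-off in the thread: such counters are cheap to read
and maintain, but fixed, whereas blktrace events can be post-processed
into whatever statistics are needed.
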
--
Jens Axboe
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/