Message-ID: <453E9C18.70803@de.ibm.com>
Date:	Wed, 25 Oct 2006 01:04:56 +0200
From:	Martin Peschke <mp3@...ibm.com>
To:	Jens Axboe <jens.axboe@...cle.com>
CC:	Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org
Subject: Re: [Patch 0/5] I/O statistics through request queues

Jens Axboe wrote:
> On Tue, Oct 24 2006, Martin Peschke wrote:
>> Jens Axboe wrote:
>>>> Our tests indicate that the blktrace approach is fine for performance
>>>> analysis as long as the system to be analysed isn't too busy.
>>>> But once the system faces a considerable amount of non-sequential I/O
>>>> workload, the sheer volume of blktrace-generated data starts to get painful.
>>> Why haven't you done an analysis and posted it here? I surely cannot fix
>>> what nobody tells me is broken or suboptimal.
>> Fair enough. We have tried out the silly way of blktrace-ing, storing
>> data locally. So, it's probably not worthwhile discussing that.
> 
> You'd probably never want to do local traces for performance analysis.
> It may be handy for other purposes, though.

"...probably not worthwhile discussing that" in the context of performance
analysis.

>>> I have to say it's news to
>>> me that it's performance intensive, tests I did with Alan Brunelle a
>>> year or so ago showed it to be quite low impact.
>> I found some discussions on linux-btrace (February 2006).
>> There is little information on how the alleged 2 percent impact has
>> been determined. Test cases seem to comprise formatting disks ...hmm.
> 
> It may sound strange, but formatting a large drive generates a huge
> flood of block layer events from lots of io queued and merged. So it's
> not a bad benchmark for this type of thing. And it's easy to test :-)

Just wondering to what degree this might resemble I/O workloads run
by customers in their data centers.

>>> You'd be silly to locally store traces, send them out over the network.
>> Will try this next and post complaints, if any, along with numbers.
> 
> Thanks! Also note that you do not need to log every event, just register
> a mask of interesting ones to decrease the output logging rate. We could
> do with some better setup for that, but at least you should be
> able to filter out some unwanted events.

...and consequently try to scale down the relay buffers, reducing the risk of
memory pressure caused by blktrace activation.
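
For illustration, a minimal sketch (not taken from the patch series; device
path, event mask and buffer sizes are arbitrary) of how the existing
BLKTRACESETUP ioctl already lets one restrict the event mask and shrink the
relay buffers:

/* build: cc -o blktrace-min blktrace-min.c (needs root and debugfs mounted) */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>                /* BLKTRACESETUP and friends */
#include <linux/blktrace_api.h>      /* struct blk_user_trace_setup, BLK_TC_* */

int main(void)
{
	struct blk_user_trace_setup buts;
	int fd = open("/dev/sda", O_RDONLY | O_NONBLOCK);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&buts, 0, sizeof(buts));
	/* log only issue and completion events instead of everything */
	buts.act_mask = BLK_TC_ISSUE | BLK_TC_COMPLETE;
	/* fewer/smaller relay sub-buffers than the blktrace tool's defaults */
	buts.buf_size = 64 * 1024;
	buts.buf_nr   = 2;

	if (ioctl(fd, BLKTRACESETUP, &buts) < 0) {
		perror("BLKTRACESETUP");
		return 1;
	}
	if (ioctl(fd, BLKTRACESTART) < 0)
		perror("BLKTRACESTART");

	/* trace data shows up in the per-cpu relay files under debugfs,
	 * e.g. /sys/kernel/debug/block/sda/trace0 */
	sleep(10);

	ioctl(fd, BLKTRACESTOP);
	ioctl(fd, BLKTRACETEARDOWN);
	close(fd);
	return 0;
}

Whether that particular mask and buffer geometry is sufficient obviously
depends on the event rate; the point is just that both knobs exist today.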

>> However, a fast network connection plus a second system for blktrace
>> data processing are serious requirements. Think of servers secured
>> by firewalls. Reading some counters in debugfs, sysfs or whatever
>> might be more appropriate for someone who has noticed an unexpected
>> I/O slowdown and needs directions for further investigation.
> 
> It's hard to make something that will suit everybody. Maintaining some
> counters in sysfs is of course less expensive when your POV is cpu
> cycles.

Counters are also cheaper with regard to memory consumption. Counters
probably cause fewer side effects, but are less flexible than
full-blown traces.
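
To illustrate the kind of counters I have in mind, here is a rough sketch that
reads the existing per-disk statistics from sysfs (the device name is
arbitrary, and this is the generic gendisk statistics file, not the per-queue
statistics proposed in this patch set):

#include <stdio.h>

int main(void)
{
	unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
	unsigned long long wr_ios, wr_merges, wr_sectors, wr_ticks;
	unsigned long long in_flight, io_ticks, time_in_queue;
	FILE *f = fopen("/sys/block/sda/stat", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
		   &wr_ios, &wr_merges, &wr_sectors, &wr_ticks,
		   &in_flight, &io_ticks, &time_in_queue) != 11) {
		fprintf(stderr, "unexpected stat format\n");
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("reads:  %llu requests, %llu merges, %llu sectors, %llu ms\n",
	       rd_ios, rd_merges, rd_sectors, rd_ticks);
	printf("writes: %llu requests, %llu merges, %llu sectors, %llu ms\n",
	       wr_ios, wr_merges, wr_sectors, wr_ticks);
	printf("in flight: %llu, busy: %llu ms, queued: %llu ms\n",
	       in_flight, io_ticks, time_in_queue);
	return 0;
}

Much coarser than a trace, of course, but it costs next to nothing to collect
and to read, which is what someone chasing an unexpected slowdown usually
wants as a first step.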


