Date:	Mon, 20 Oct 2008 19:12:48 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Miquel van Smoorenburg <mikevs@...all.net>
Cc:	Greg KH <greg@...ah.com>, linux-kernel@...r.kernel.org
Subject: Re: disk statistics issue in 2.6.27

On Sun, Oct 19 2008, Miquel van Smoorenburg wrote:
> I just upgraded one of our servers in the NNTP cluster to 2.6.27.1 -
> most of the others are running 2.6.26.something.
> 
> I noticed that the "iostat -k -x 2" output doesn't make any sense.
> The number of reads/sec and the number of writes/sec are about what I
> would expect, and so are the other fields, but rkB/sec and wkB/sec
> are completely off-scale: gigabytes read/written per second.
> 
> To be sure it wasn't a bug in iostat I wrote a small Perl script
> to process /sys/block/sda/stat, and it shows the same problem.
> 
> Note that the stats in /proc/diskstats and /sys/block/<dev>/stat
> are the same.
> 
> I've tried both the cfq and deadline I/O schedulers - no difference.
> 
> $ perl mystat.pl 
> Device:   r/s   w/s     rkB/s     wkB/s
> sda       141    53   2301120   1795200
> 
> Device:   r/s   w/s     rkB/s     wkB/s
> sda       145     7   2366400   3394560
> 
> I compiled a 2.6.26.6 kernel with the exact same .config and it
> doesn't show the problem.
> 
> I've been staring at include/linux/genhd.h and block/genhd.c
> for a while but I just don't see it.
> 
> The mystat.pl perl script and my .config are below.
> The machine is a dual Xeon 2.2 GHz, 32-bit, 4 GB RAM; /dev/sda
> is a 3-disk SCSI RAID5 array on an Adaptec 2005S controller.
> 
> Any idea what could be causing this?
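
A minimal sketch of the kind of per-second reader described above (this is
not the original mystat.pl): it samples /sys/block/<dev>/stat at a fixed
interval and converts the sector counters to kB. The field layout assumed
here is the one documented in Documentation/iostats.txt: field 1 = reads
completed, field 3 = sectors read, field 5 = writes completed, field 7 =
sectors written, with 512-byte sectors (so sectors / 2 = kB).

#!/usr/bin/perl
# Sketch only, not the mystat.pl from the report above.
use strict;
use warnings;

my $dev      = shift || 'sda';   # device name under /sys/block
my $interval = 2;                # seconds between samples

# Return (reads completed, sectors read, writes completed, sectors written).
sub read_stat {
    open my $fh, '<', "/sys/block/$dev/stat" or die "open: $!";
    my @f = split ' ', scalar <$fh>;
    close $fh;
    return @f[0, 2, 4, 6];
}

my @prev = read_stat();
while (1) {
    sleep $interval;
    my @cur = read_stat();
    my ($r, $rsec, $w, $wsec) = map { $cur[$_] - $prev[$_] } 0 .. 3;
    printf "%-8s r/s %7.1f  w/s %7.1f  rkB/s %10.1f  wkB/s %10.1f\n",
        $dev, $r / $interval, $w / $interval,
        $rsec / 2 / $interval, $wsec / 2 / $interval;
    @prev = @cur;
}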

Weird, I cannot reproduce this at all; iostat works fine for me in .26,
.27, and current -git as well. So it's just a plain SCSI drive as seen by
Linux, no software RAID or dm?

Are the reported values in iostat any sort of multiple of the real
throughput, or are they just insanely large?
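
As a rough sanity check on the first sample line in the report (141 r/s,
rkB/s 2301120), here is the arithmetic for two hypothetical unit mix-ups:
sectors mislabelled as kB would only inflate the numbers 2x, and bytes
mislabelled as kB would inflate them 1024x.

# Sketch only; both unit mix-up cases are hypothetical.
my ($reads_per_s, $rkb_per_s) = (141, 2_301_120);
printf "kB per read request: %.0f\n", $rkb_per_s / $reads_per_s;        # 16320 kB
printf "MB/s if counters were sectors: %.0f\n", $rkb_per_s / 2 / 1024;  # ~1124 MB/s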

-- 
Jens Axboe

