Date:	Thu, 5 Mar 2015 08:31:11 +1100
From:	NeilBrown <neilb@...e.de>
To:	Dave Jones <davej@...emonkey.org.uk>
Cc:	Linux Kernel <linux-kernel@...r.kernel.org>,
	linux RAID <linux-raid@...r.kernel.org>
Subject: Re: RAID0 & diskstats.

On Wed, 4 Mar 2015 16:09:04 -0500 Dave Jones <davej@...emonkey.org.uk> wrote:

> Hi Neil,
>    According to Documentation/iostats.txt, the 9th column of
> /proc/diskstats (and its modern replacement in sysfs) should go to 0
> as IO completes.
> 
> I assembled a RAID0 stripe from two SSDs, and saw this:
> 
> # mdadm --assemble /dev/md0
> mdadm: /dev/md0 has been started with 2 drives.
> # cat /sys/block/md0/stat
>      167        0     5656        0        5        0     4096        0     172     3408   582825
> # cat /sys/block/md0/stat
>      167        0     5656        0        5        0     4096        0     172   231469 39809317
> 
> The 10th and 11th fields increase continuously while field 9 remains non-zero.
> If I mount and umount a filesystem on that volume, that works as expected,
> but the 9th ('IOs in flight') field continues to rise and never decreases,
> even though the IO has obviously completed.
> 
> # umount /mnt/ssd
> # cat /sys/block/md0/stat
>      167        0     5656        0        9        0     4225        0     176   571384 98278615
> 
> The underlying disks have their respective stats entries behaving as
> expected; it only seems to affect the upper md layer.
> 
> Some missing accounting somewhere in md?
> 
> (Only tested on 4.0-rc2 so far, and only on RAID0.)
> 
> 	Dave

blockdev stats often aren't really a good match for md/raid...

"in_flight" assumes a queue, and raid0 doesn't have one.  It just redirects
each request to the relevant device and lets the device handle it.

The only useful thing we could do here is make that value always zero. 


I guess I need to add a "generic_end_io_acct()" call to md.c somewhere.
For raid1/5 there is probably somewhere sensible to put it.
For raid0/linear, it probably goes immediately after generic_start_io_acct().
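
Untested sketch only - the function and its surroundings below are made up
for illustration, but the accounting calls match the 4.0-rc2 signatures in
block/bio.c:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/jiffies.h>
#include "md.h"		/* struct mddev, as in drivers/md */

/*
 * Illustrative only, not a real patch: a personality with no internal
 * queue (raid0/linear) can close the accounting window as soon as it
 * opens it, so "in_flight" stays at zero.
 */
static void noqueue_make_request(struct mddev *mddev, struct bio *bio)
{
	int rw = bio_data_dir(bio);
	unsigned long start_time = jiffies;

	generic_start_io_acct(rw, bio_sectors(bio), &mddev->gendisk->part0);

	/* ... remap the bio and submit it to the member device ... */

	/*
	 * The remapped bio completes against the member disk, so md never
	 * sees a completion for part0; end the accounting immediately.
	 */
	generic_end_io_acct(rw, &mddev->gendisk->part0, start_time);
}

For raid1/5 the generic_end_io_acct() call would instead belong in the
completion path, once the last bio for the original request has finished.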

Patches welcome :-)

NeilBrown

