Message-Id: <20090521.122203.193689175.ryov@valinux.co.jp>
Date:	Thu, 21 May 2009 12:22:03 +0900 (JST)
From:	Ryo Tsuruta <ryov@...inux.co.jp>
To:	vgoyal@...hat.com
Cc:	linux-kernel@...r.kernel.org, dm-devel@...hat.com,
	containers@...ts.linux-foundation.org,
	virtualization@...ts.linux-foundation.org,
	xen-devel@...ts.xensource.com
Subject: Re: [PATCH 1/1] dm-ioband: I/O bandwidth controller

Hi Vivek,

> Anyway, how are you taking care of priorities within the same class? How
> will you make sure that a BE prio 0 request is not hidden behind a BE
> prio 7 request? The same is true for priorities within the RT class.

I changed the io_limit parameter of dm-ioband and ran your test script
again. io_limit determines how many I/O requests can be held in
dm-ioband; its default value is equal to nr_requests of the underlying
device. I increased it from 128 to 256 for this test, because the
writer issues more I/O requests than nr_requests.
The results below show that dm-ioband does not break the notion of
CFQ priority. I think the impact of dm-ioband's internal queue on CFQ
is insignificant, because the queue is not very long, and the notion
of CFQ priority is preserved within each bandwidth group.
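
For reference, nr_requests of the underlying device can be checked
through sysfs. A minimal sketch, assuming the disk behind 8:18
appears as sdb:

  $ cat /sys/block/sdb/queue/nr_requests
  128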

Setting
-------
ioband1: 0 112455000 ioband 8:18 share1 4 256 user weight 512 :40
                                          ^^^ io_limit
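
The line above is the output of "dmsetup table". To reproduce the
setup, the same line can be fed back in; a minimal sketch, reusing
the 8:18 major:minor pair from the table:

  echo "0 112455000 ioband 8:18 share1 4 256 user weight 512 :40" | \
      dmsetup create ioband1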

Script
------
#!/bin/bash
rm -f /mnt1/aggressivewriter   # -f: ignore a missing file on the first run
sync
echo 3 > /proc/sys/vm/drop_caches   # drop clean caches so both dds hit the disk
# launch a hostile writer (best-effort class, lowest priority 7)
ionice -c2 -n7 dd if=/dev/zero of=/mnt1/aggressivewriter bs=4K \
        count=524288 conv=fdatasync &
# reader (best-effort class, highest priority 0)
ionice -c2 -n0 dd if=/mnt1/testzerofile1 of=/dev/null &
wait $!   # $! is the PID of the last background job, i.e. the reader
echo "reader finished"

Without dm-ioband
-----------------
First run
2147483648 bytes (2.1 GB) copied, 34.8201 seconds, 61.7 MB/s (Reader)
reader finished
2147483648 bytes (2.1 GB) copied, 68.9099 seconds, 31.2 MB/s (Writer)

Second run
2147483648 bytes (2.1 GB) copied, 34.8201 seconds, 61.7 MB/s (Reader)
reader finished
2147483648 bytes (2.1 GB) copied, 68.9099 seconds, 31.2 MB/s (Writer)

With dm-ioband
--------------
First run
2147483648 bytes (2.1 GB) copied, 35.852 seconds, 59.9 MB/s  (Reader)
reader finished
2147483648 bytes (2.1 GB) copied, 73.3991 seconds, 29.3 MB/s (Writer)

Second run
2147483648 bytes (2.1 GB) copied, 36.0273 seconds, 59.6 MB/s (Reader)
reader finished
2147483648 bytes (2.1 GB) copied, 72.8979 seconds, 29.5 MB/s (Writer)
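
Compared with the runs without dm-ioband, the reader loses roughly 3%
of throughput (61.7 -> ~59.7 MB/s) and the writer roughly 6%
(31.2 -> ~29.4 MB/s), while the prio 0 reader still finishes well
ahead of the prio 7 writer.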

For reference, here are the previous test results:
http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-04/msg08345.html

Thanks,
Ryo Tsuruta
